1. CDS270, Maryam Fazel. Lecture 2: Topics from Optimization and Duality

Motivation: the network utility maximization (NUM) problem. Consider a network with S sources (users), each sending one flow at rate x_s through a path L(s), with utility U_s(x_s). There are L links, each with capacity c_l, shared by the set of sources S(l).

[Figure: example network with flows x_1, ..., x_4 sharing links with capacities c_1, ..., c_4.]

Outline: the Network Utility Maximization (NUM) problem; application: Internet congestion control; basic unconstrained minimization; duality theory; Lagrange dual of NUM, decentralization.

The NUM problem is

    maximize    sum_{s=1}^S U_s(x_s)
    subject to  sum_{s in S(l)} x_s <= c_l,   l = 1, ..., L

with optimization variable x ∈ R^S (the domain of U_s is x_s >= 0).
2. When the U_s(x_s) are concave functions, NUM is a convex optimization problem.

TCP/AQM congestion control: seminal paper [Kelly, Maulloo, Tan 98]; an important foundation of data networks. The structure (APP/TCP/IP/MAC/PHY) is layered, and different parts of NUM relate to different layers; e.g., varying R, c would involve the IP and PHY layers (more later).

A large class of congestion control mechanisms can be interpreted as distributed algorithms that solve NUM and its dual:
- x_s: source rate, updated by TCP (Transmission Control Protocol), e.g., Reno, Vegas
- λ_l: link congestion measure, or price, updated by AQM (Active Queue Management), e.g., RED, DropTail

For example, TCP Reno uses packet loss as its congestion measure, while TCP Vegas uses queueing delay. NUM is useful as a framework for understanding network equilibrium and dynamics, and is also a central problem in economics.
3. Topics we cover

- unconstrained minimization: basic unconstrained minimization, gradient descent
- duality theory: the Lagrange dual problem and its properties; strong duality and Slater's condition; interpretations of duality; KKT optimality conditions; theorems of alternatives
- Lagrange dual of NUM, decentralization

Refs: Boyd & Vandenberghe, Convex Optimization (boyd/cvxbook), Chapter 5; Bertsekas, Nonlinear Programming, Chapter 1.

Basic unconstrained minimization:

    minimize  f(x),    f : R^n -> R convex, differentiable

A minimizing sequence is x^(k) with f(x^(k)) -> f* as k -> ∞. The optimality condition ∇f(x*) = 0 is a set of nonlinear equations, usually with no analytical solution. More generally, if ∇²f(x) ⪰ mI with m > 0, then

    f(x) - f* <= (1/(2m)) ||∇f(x)||²

which yields a stopping criterion (if you know m).
4. Descent methods

    given a starting point x ∈ dom f
    repeat
        1. Compute a search direction v.
        2. Line search: choose a step size t > 0.
        3. Update: x := x + t v.
    until stopping criterion is satisfied

A descent method satisfies f(x^(k+1)) < f(x^(k)); since f is convex, v must be a descent direction: ∇f(x^(k))^T v^(k) < 0.

Choosing v (descent direction):
- v^(k) = -∇f(x^(k)) (gradient)
- v^(k) = -H^(k) ∇f(x^(k)), with H^(k) = (H^(k))^T ≻ 0
- v^(k) = -∇²f(x^(k))^{-1} ∇f(x^(k)) (Newton's)

Choosing t (step size):
- exact line search: t = argmin_{s>0} f(x + s v)
- backtracking line search (picks a step size that reduces f "enough")
- constant step size

[Figure: level sets f(x) = c_1, f(x) = c_2, with the descent direction v^(k) and -∇f(x^(k)) at x^(k).]

(The main idea extends to nondifferentiable f using subgradients.)
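The generic descent loop above can be sketched in a few lines. This is a minimal illustration, not code from the lecture: it uses the negative-gradient direction and a standard backtracking (sufficient-decrease) line search, applied to a made-up smooth convex test function.

```python
import numpy as np

def grad_descent(f, grad, x0, alpha=0.25, beta=0.5, tol=1e-6, max_iter=1000):
    """Gradient descent with backtracking line search (a sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # stopping criterion on ||grad f(x)||
            break
        v = -g                           # descent direction: negative gradient
        t = 1.0
        # backtrack until f decreases "enough": f(x+tv) <= f(x) + alpha*t*grad'v
        while f(x + t * v) > f(x) + alpha * t * (g @ v):
            t *= beta
        x = x + t * v
    return x

# illustrative smooth convex function (an assumption, not from the notes)
f = lambda x: (np.exp(x[0] + 3*x[1] - 0.1) + np.exp(x[0] - 3*x[1] - 0.1)
               + np.exp(-x[0] - 0.1))
grad = lambda x: np.array([
    np.exp(x[0] + 3*x[1] - 0.1) + np.exp(x[0] - 3*x[1] - 0.1) - np.exp(-x[0] - 0.1),
    3*np.exp(x[0] + 3*x[1] - 0.1) - 3*np.exp(x[0] - 3*x[1] - 0.1),
])

x_star = grad_descent(f, grad, [1.0, 0.5])
print(x_star, np.linalg.norm(grad(x_star)))
```

Swapping in a different choice of v (e.g., a Newton step) only changes the line `v = -g`; the line search and update are unchanged.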
5. Gradient descent method

    given a starting point x ∈ dom f
    repeat
        1. Compute the search direction v = -∇f(x).
        2. Line search: choose a step size t.
        3. Update: x := x + t v.
    until stopping criterion is satisfied

Converges with exact or backtracking line search. A simple (but conservative) convergence condition for t: suppose f has bounded curvature, ∇²f(x) ⪯ MI. Then there exists z on the line segment [x, x + tv] such that

    f(x + tv) - f(x) = ∇f(x)^T (tv) + (1/2)(tv)^T ∇²f(z)(tv)
                     <= -t ||∇f(x)||² + (1/2) t² M ||∇f(x)||²,

so -t + (1/2) t² M < 0, i.e., t < 2/M, guarantees descent.

Example: minimize (1/2)(x_1² + M x_2²), where M > 0; the optimal point is x* = 0. Use exact line search, starting at x^(0) = (M, 1). The condition number κ = max{M, 1/M} determines the convergence rate.
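The example above can be run numerically. The sketch below (mine, not from the lecture) uses the fact that for a quadratic f(x) = (1/2) x^T Q x, exact line search has the closed form t = (g^T g)/(g^T Q g) with g = ∇f(x) = Qx; counting iterations for two values of M shows the dependence on the condition number.

```python
import numpy as np

def iterations_to_converge(M, tol=1e-6):
    """Exact-line-search gradient descent on f(x) = 0.5*(x1^2 + M*x2^2)."""
    Q = np.diag([1.0, M])
    x = np.array([float(M), 1.0])    # starting point x^(0) = (M, 1) from the example
    k = 0
    while 0.5 * x @ Q @ x > tol:     # f(x*) = 0, so f(x) measures suboptimality
        g = Q @ x                    # gradient of the quadratic
        t = (g @ g) / (g @ Q @ g)    # exact line search step for a quadratic
        x = x - t * g
        k += 1
    return k

# a larger condition number requires many more iterations
print(iterations_to_converge(10), iterations_to_converge(100))
```

With M = 100 the iterates zig-zag across the narrow valley and take roughly ten times as many steps as with M = 10, matching the κ-dependent rate claimed above.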
6. Notes: convergence of gradient descent can be very slow, and depends critically on the condition number. Newton's method (and its variations) have much better convergence properties. However, in networking problems gradient descent proves very useful, since the main concern there is decentralization (details later).

How about constrained optimization problems? Next topic: Lagrangian duality.

The Lagrangian

Standard form optimization problem:

    minimize    f_0(x)
    subject to  f_i(x) <= 0,   i = 1, ..., m

with optimal value p* and domain D; called the primal problem (in the context of duality). (For now) we don't assume convexity.

Lagrangian L : R^{n+m} -> R:

    L(x, λ) = f_0(x) + λ_1 f_1(x) + ... + λ_m f_m(x)

The λ_i are called Lagrange multipliers or dual variables; the objective is augmented with a weighted sum of the constraint functions.
7. Lagrange dual function

The (Lagrange) dual function g : R^m -> R ∪ {-∞} is

    g(λ) = inf_x L(x, λ) = inf_x ( f_0(x) + λ_1 f_1(x) + ... + λ_m f_m(x) ),

the minimum of the augmented cost as a function of the weights; it can be -∞ for some λ. g is concave (even if the f_i are not convex!).

Example: LP

    minimize    c^T x
    subject to  a_i^T x - b_i <= 0,   i = 1, ..., m

    L(x, λ) = c^T x + sum_{i=1}^m λ_i (a_i^T x - b_i) = -b^T λ + (A^T λ + c)^T x

hence

    g(λ) = -b^T λ  if A^T λ + c = 0,  and -∞ otherwise.

Lower bound property: if λ ⪰ 0 and x is primal feasible, then g(λ) <= f_0(x).

Proof: if f_i(x) <= 0 and λ_i >= 0, then

    f_0(x) >= f_0(x) + sum_i λ_i f_i(x) >= inf_z ( f_0(z) + sum_i λ_i f_i(z) ) = g(λ).

f_0(x) - g(λ) is called the duality gap of (primal feasible) x and λ ⪰ 0. Minimizing over primal feasible x gives, for any λ ⪰ 0, g(λ) <= p*. We say λ ∈ R^m is dual feasible if λ ⪰ 0 and g(λ) > -∞. Dual feasible points yield lower bounds on the optimal value!
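The lower bound property is easy to check numerically. The following toy problem is my own illustration, not from the lecture: minimize x² subject to 1 - x <= 0, so p* = 1 at x* = 1, and minimizing the Lagrangian over x gives g(λ) = λ - λ²/4 in closed form.

```python
import numpy as np

# toy problem: minimize x^2  s.t.  1 - x <= 0
# L(x, lam) = x^2 + lam*(1 - x) is minimized over x at x = lam/2, so
# g(lam) = lam - lam^2/4, a concave function of lam.
g = lambda lam: lam - lam**2 / 4.0
f0 = lambda x: x**2
p_star = 1.0                                 # attained at x* = 1

for lam in np.linspace(0.0, 5.0, 51):        # any lam >= 0 ...
    for x in np.linspace(1.0, 4.0, 31):      # ... and any feasible x (x >= 1)
        assert g(lam) <= f0(x) + 1e-12       # lower bound property
    assert g(lam) <= p_star + 1e-12          # hence g(lam) <= p*

# the best lower bound: max over lam >= 0 of g(lam)
print(max(g(lam) for lam in np.linspace(0.0, 5.0, 501)))
```

Here the best lower bound, attained at λ = 2, actually equals p* = 1: a first glimpse of strong duality, discussed next.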
8. Lagrange dual problem

Let's find the best lower bound on p*:

    maximize    g(λ)
    subject to  λ ⪰ 0

called the (Lagrange) dual problem (associated with the primal problem). It is always a convex problem, even if the primal isn't! Its optimal value is denoted d*, and we always have d* <= p* (called weak duality); p* - d* is the optimal duality gap.

Strong duality and constraint qualifications

For convex problems, we (usually) have strong duality: d* = p*. (Strong duality does not hold, in general, for nonconvex problems.) When strong duality holds, a dual optimal λ* serves as a certificate of optimality for the primal optimal point x*. Many conditions, or constraint qualifications, guarantee strong duality for convex problems.

Slater's condition: if the (convex) primal problem is strictly feasible, i.e., there exists x ∈ relint D with

    f_i(x) < 0,   i = 1, ..., m,

then p* = d*.
9. Dual of a linear program

Primal LP:

    minimize    c^T x
    subject to  Ax ⪯ b

with n variables and m inequality constraints. The dual of the LP is (after making implicit equality constraints explicit)

    maximize    -b^T λ
    subject to  A^T λ + c = 0
                λ ⪰ 0

so the dual of an LP is also an LP (in standard LP format), with m variables, n equality constraints, and m nonnegativity constraints. For LP we have strong duality except in one (pathological) case: primal and dual both infeasible (p* = +∞, d* = -∞).

Dual of a quadratic program

Primal QP:

    minimize    x^T P x
    subject to  Ax ⪯ b

where we assume P ≻ 0 for simplicity. The Lagrangian is L(x, λ) = x^T P x + λ^T (Ax - b), and ∇_x L(x, λ) = 0 yields x = -(1/2) P^{-1} A^T λ, hence the dual function is

    g(λ) = -(1/4) λ^T A P^{-1} A^T λ - b^T λ,

a concave quadratic function; all λ ⪰ 0 are dual feasible. The dual of the QP is

    maximize    -(1/4) λ^T A P^{-1} A^T λ - b^T λ
    subject to  λ ⪰ 0

... another QP.
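The closed-form QP dual function can be verified numerically on a small instance (the data below is made up for illustration): the minimizer x = -(1/2) P^{-1} A^T λ should achieve the value g(λ), and g(λ) should lower-bound the objective at every primal feasible point.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[2.0, 0.5],          # P > 0 (eigenvalues positive)
              [0.5, 1.0]])
A = np.array([[1.0, 1.0], [-1.0, 2.0], [0.0, -1.0]])
b = np.array([1.0, 2.0, 3.0])
Pinv = np.linalg.inv(P)

def g(lam):
    """QP dual function, from the closed form derived above."""
    return -0.25 * lam @ A @ Pinv @ A.T @ lam - b @ lam

def L(x, lam):
    return x @ P @ x + lam @ (A @ x - b)

for _ in range(100):
    lam = rng.uniform(0.0, 3.0, size=3)           # any lam >= 0 is dual feasible
    x_min = -0.5 * Pinv @ A.T @ lam               # minimizer of L(., lam)
    assert abs(L(x_min, lam) - g(lam)) < 1e-9     # closed form matches inf_x L
    x = rng.uniform(-1.0, 1.0, size=2)
    if np.all(A @ x <= b):                        # weak duality at feasible x
        assert g(lam) <= x @ P @ x + 1e-9

print("closed-form QP dual verified")
```

Note that x = 0 is feasible here (b ⪰ 0), so p* = 0, and every g(λ) computed above is indeed nonpositive.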
10. Minmax and saddle-point interpretation

We can express the primal and dual problems in a more symmetric form:

    sup_{λ⪰0} ( f_0(x) + sum_{i=1}^m λ_i f_i(x) ) = f_0(x)  if f_i(x) <= 0 for all i,  and +∞ otherwise,

so p* = inf_x sup_{λ⪰0} L(x, λ). Also, by definition, d* = sup_{λ⪰0} inf_x L(x, λ). Weak duality can be expressed as

    sup_{λ⪰0} inf_x L(x, λ) <= inf_x sup_{λ⪰0} L(x, λ),

with strong duality when equality holds: we can then switch the order of minimization over x and maximization over λ ⪰ 0. If x* and λ* are primal and dual optimal and strong duality holds, they form a saddle point of the Lagrangian (the converse is also true).

Economic (price) interpretation

    minimize    f_0(x)
    subject to  f_i(x) <= 0,   i = 1, ..., m

f_0(x) is the cost of operating a firm at operating condition x; the constraints give resource limits. Suppose the firm:
- can violate f_i(x) <= 0 by paying an additional cost of λ_i (in dollars per unit violation), i.e., incurs cost λ_i f_i(x) if f_i(x) > 0;
- can sell the unused portion of the ith constraint at the same price, i.e., gains profit -λ_i f_i(x) if f_i(x) < 0.

The total cost to the firm to operate at x, at constraint prices λ_i, is

    L(x, λ) = f_0(x) + sum_{i=1}^m λ_i f_i(x).
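The minmax inequality can be seen concretely by tabulating L(x, λ) on a grid. This is an illustration of mine, not from the lecture, using a one-variable toy problem (minimize x² subject to 1 - x <= 0) with λ restricted to a bounded grid:

```python
import numpy as np

# L(x, lam) = x^2 + lam*(1 - x) for the toy problem above
xs = np.linspace(-3.0, 3.0, 121)
lams = np.linspace(0.0, 4.0, 81)
L = xs[:, None]**2 + lams[None, :] * (1.0 - xs[:, None])   # L[i, j] = L(x_i, lam_j)

max_min = L.min(axis=0).max()   # sup_lam inf_x L   (restricted to the grids)
min_max = L.max(axis=1).min()   # inf_x sup_lam L

assert max_min <= min_max + 1e-12   # weak duality as a max-min inequality
print(max_min, min_max)
```

For any matrix, max-min over one index never exceeds min-max over the other, which is exactly weak duality in this finite setting; for this convex toy problem the two grid values essentially coincide at p* = 1, reflecting strong duality.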
11. Interpretations:
- dual function: g(λ) = inf_x L(x, λ) is the optimal cost to the firm at constraint prices λ_i
- weak duality: the cost can be lowered if the firm is allowed to pay for violated constraints (and get paid for nontight ones)
- duality gap: the advantage to the firm under this scenario
- strong duality: λ* gives prices for which the firm has no advantage in being allowed to violate constraints
- ... dual optimal λ* are called shadow prices for the original problem

Geometric interpretation of duality

Consider the set

    A = { (u, t) ∈ R^{m+1} : there exists x with f_i(x) <= u_i, f_0(x) <= t }.

A is convex if the f_i are. For λ ⪰ 0,

    g(λ) = inf { [λ; 1]^T [u; t] : (u, t) ∈ A }

[Figure: the set A supported by the hyperplane t + λ^T u = g(λ), with normal (λ, 1).]
12. (Idea of) proof of Slater's theorem: problem convex and strictly feasible implies strong duality.

The point (0, p*) lies on the boundary of A, so there is a supporting hyperplane at (0, p*):

    (u, t) ∈ A  =>  μ_0 (t - p*) + μ^T u >= 0,

with μ_0 >= 0, μ ⪰ 0, (μ, μ_0) ≠ 0.

Slater's condition: there exists (u, t) ∈ A with u ≺ 0; this implies that all supporting hyperplanes at (0, p*) are nonvertical (μ_0 > 0).

Strong duality: for a supporting hyperplane with μ_0 > 0, set λ* = μ/μ_0; then

    p* <= t + (λ*)^T u   for all (u, t) ∈ A,

hence p* <= g(λ*), which together with weak duality gives p* = d*.

[Figure: the set A, the point (0, p*) on its boundary, and a nonvertical supporting hyperplane with normal (λ, 1).]
13. Sensitivity analysis via duality

Define p*(u) as the optimal value of

    minimize    f_0(x)
    subject to  f_i(x) <= u_i,   i = 1, ..., m

λ* gives a lower bound on p*(u):

    p*(u) >= p*(0) - sum_{i=1}^m λ_i* u_i

- if λ_i* is large: u_i < 0 greatly increases p*
- if λ_i* is small: u_i > 0 does not decrease p* too much

If p*(u) is differentiable, λ_i* = -∂p*(0)/∂u_i, so λ_i* is the sensitivity of p* with respect to the ith constraint.

[Figure: epigraph of p*(u), supported at (0, p*(0)) by the hyperplane p*(0) - (λ*)^T u.]

Complementary slackness

Suppose x*, λ* are primal, dual feasible with zero duality gap (hence, they are primal, dual optimal). Then

    f_0(x*) = g(λ*) = inf_x ( f_0(x) + sum_{i=1}^m λ_i* f_i(x) )
            <= f_0(x*) + sum_{i=1}^m λ_i* f_i(x*)
            <= f_0(x*),

hence sum_{i=1}^m λ_i* f_i(x*) = 0, and so

    λ_i* f_i(x*) = 0,   i = 1, ..., m,

called the complementary slackness condition:
- ith constraint inactive at the optimum  =>  λ_i* = 0
- λ_i* > 0 at the optimum  =>  ith constraint active at the optimum
14. KKT optimality conditions

Suppose the f_i are differentiable and x*, λ* are (primal, dual) optimal with zero duality gap. By complementary slackness,

    f_0(x*) + sum_i λ_i* f_i(x*) = f_0(x*) = g(λ*) = inf_x ( f_0(x) + sum_i λ_i* f_i(x) ),

i.e., x* minimizes L(x, λ*); therefore

    ∇f_0(x*) + sum_i λ_i* ∇f_i(x*) = 0.

So if x*, λ* are (primal, dual) optimal with zero duality gap, they satisfy

    f_i(x*) <= 0
    λ_i* >= 0
    λ_i* f_i(x*) = 0
    ∇f_0(x*) + sum_i λ_i* ∇f_i(x*) = 0

the Karush-Kuhn-Tucker (KKT) optimality conditions. Conversely, if the problem is convex and x*, λ* satisfy KKT, then they are (primal, dual) optimal.
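The four KKT conditions can be checked mechanically at a candidate point. The toy convex problem below is my own illustration: minimize (x1-2)² + (x2-1)² subject to x1 + x2 - 1 <= 0; the unconstrained minimum (2, 1) is infeasible, so the constraint is active, and solving the KKT system by hand gives x* = (1, 0), λ* = 2.

```python
import numpy as np

x_star = np.array([1.0, 0.0])
lam_star = 2.0

f1 = x_star[0] + x_star[1] - 1.0                     # constraint value f_1(x*)
grad_f0 = 2.0 * (x_star - np.array([2.0, 1.0]))      # gradient of the objective
grad_f1 = np.array([1.0, 1.0])                       # gradient of the constraint

assert f1 <= 1e-12                                   # primal feasibility
assert lam_star >= 0.0                               # dual feasibility
assert abs(lam_star * f1) < 1e-12                    # complementary slackness
assert np.allclose(grad_f0 + lam_star * grad_f1, 0)  # stationarity of L(., lam*)
print("KKT conditions hold at (x*, lam*)")
```

Since the problem is convex, passing all four checks certifies that (x*, λ*) is primal-dual optimal, by the converse direction stated above.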
15. Generalized inequalities

    minimize    f_0(x)
    subject to  f_i(x) ⪯_{K_i} 0,   i = 1, ..., L

where ⪯_{K_i} are generalized inequalities on R^{m_i} and the f_i : R^n -> R^{m_i} are K_i-convex.

Lagrangian L : R^n × R^{m_1} × ... × R^{m_L} -> R:

    L(x, λ_1, ..., λ_L) = f_0(x) + λ_1^T f_1(x) + ... + λ_L^T f_L(x)

Dual function:

    g(λ_1, ..., λ_L) = inf_x ( f_0(x) + λ_1^T f_1(x) + ... + λ_L^T f_L(x) )

λ_i is dual feasible if λ_i ⪰_{K_i*} 0 (K_i* the dual cone) and g(λ_1, ..., λ_L) > -∞.

Lower bound property: if x is primal feasible and (λ_1, ..., λ_L) is dual feasible, then g(λ_1, ..., λ_L) <= f_0(x) (hence, g(λ_1, ..., λ_L) <= p*).

Dual problem:

    maximize    g(λ_1, ..., λ_L)
    subject to  λ_i ⪰_{K_i*} 0,   i = 1, ..., L

Weak duality, d* <= p*, always holds; strong duality, d* = p*, usually. Slater condition: if the primal is strictly feasible, i.e., there exists x ∈ relint D with f_i(x) ≺_{K_i} 0, i = 1, ..., L, then d* = p*.
16. Example: semidefinite programming

    minimize    c^T x
    subject to  F_0 + x_1 F_1 + ... + x_n F_n ⪯ 0

Lagrangian (multiplier Z = Z^T ∈ R^{m×m}):

    L(x, Z) = c^T x + Tr( Z (F_0 + x_1 F_1 + ... + x_n F_n) )

Dual function:

    g(Z) = inf_x ( c^T x + Tr( Z (F_0 + x_1 F_1 + ... + x_n F_n) ) )
         = Tr(F_0 Z)  if Tr(F_i Z) + c_i = 0, i = 1, ..., n,  and -∞ otherwise

Dual problem:

    maximize    Tr(F_0 Z)
    subject to  Tr(F_i Z) + c_i = 0,   i = 1, ..., n
                Z = Z^T ⪰ 0

Strong duality holds if there exists x with F_0 + x_1 F_1 + ... + x_n F_n ≺ 0.

Theorem of alternatives

Apply Lagrange duality theory to the feasibility problem: let f_1, ..., f_m be convex with dom f_i = R^n. Exactly one of the following is true:
1. there exists x with f_i(x) < 0, i = 1, ..., m
2. there exists λ ⪰ 0, λ ≠ 0, with g(λ) = inf_x ( λ_1 f_1(x) + ... + λ_m f_m(x) ) >= 0

These are called alternatives. Use in practice: a λ that satisfies the second condition proves that f_i(x) < 0, i = 1, ..., m, is infeasible.

Example: f_i(x) = a_i^T x - b_i:
1. there exists x with Ax ≺ b
2. there exists λ ⪰ 0, λ ≠ 0, with b^T λ <= 0, A^T λ = 0
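For the linear-inequality example, an infeasibility certificate is just a vector λ passing three checks. The instance below is made up for illustration: the system asks for x < 0 and x > 1 simultaneously, and λ = (1, 1) certifies that this is impossible.

```python
import numpy as np

# f_i(x) = a_i^T x - b_i with A = [[1], [-1]], b = [0, -1]:
# "Ax < b" means x < 0 and -x < -1 (i.e., x > 1): infeasible.
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])

lam = np.array([1.0, 1.0])                     # candidate certificate
assert np.all(lam >= 0) and np.any(lam > 0)    # lam >= 0, lam != 0
assert np.allclose(A.T @ lam, 0)               # A^T lam = 0
assert b @ lam <= 0                            # b^T lam <= 0
# then g(lam) = inf_x lam^T (Ax - b) = -b^T lam >= 0, so Ax < b has no solution
print("infeasibility certificate verified")
```

Note the asymmetry of the two alternatives: alternative 1 requires exhibiting an x, while alternative 2 gives a short, easily checked proof that no such x exists.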
17. Proof. From convex duality: consider the primal problem (variables x, t)

    minimize    t
    subject to  f_i(x) <= t,   i = 1, ..., m

with dual problem

    maximize    g(λ)
    subject to  λ ⪰ 0,  1^T λ = 1

Slater's condition is satisfied, hence p* = d*. The first alternative corresponds to p* < 0, the second to p* >= 0.

Dual of NUM

Back to the network utility maximization problem we saw before. Primal:

    maximize    sum_s U_s(x_s)
    subject to  Rx ⪯ c

where R is the routing matrix. Lagrangian:

    L(x, λ) = sum_s U_s(x_s) + sum_l λ_l ( c_l - sum_s R_ls x_s )
            = sum_s [ U_s(x_s) - ( sum_l R_ls λ_l ) x_s ] + sum_l c_l λ_l

    D(λ) = max_x L(x, λ) = sum_s max_{x_s} L_s(x_s, λ^s) + sum_l c_l λ_l

where L_s(x_s, λ^s) = U_s(x_s) - λ^s x_s and λ^s = sum_l R_ls λ_l is the end-to-end path price for source s.
18. Dual:

    minimize    sum_s max_{x_s >= 0} ( U_s(x_s) - λ^s x_s ) + sum_l c_l λ_l
    subject to  λ ⪰ 0

Dual-based distributed algorithm: one way to solve the problem iteratively is to use (a variation on) the gradient method for the dual. Additivity of the total utility and of the flow constraints leads to a decomposition into individual source terms: a form of dual decomposition. This decomposition is called "horizontal", across the users in the network: each source can keep its utility private, and each source only needs to know λ^s to solve the subproblem max_{x_s >= 0} ( U_s(x_s) - λ^s x_s ). The optimal prices λ* serve as a coordinating signal that aligns individual (selfish) optimality with social optimality.

Source algorithm: a single-variable optimization,

    x_s*(λ^s) = argmax_{x_s} [ U_s(x_s) - ( sum_l R_ls λ_l(t) ) x_s ],   for each s.

If the U_s are strictly concave and twice continuously differentiable,

    x_s*(λ^s) = U_s'^{-1}(λ^s),

called the demand function in microeconomics (more later).
19. Link algorithm: if the U_s are strictly concave and twice continuously differentiable,

    ∂D/∂λ_l (λ(k)) = c_l - sum_s R_ls x_s*(λ(k))

Use the gradient projection method:

    λ_l(k+1) = [ λ_l(k) - α(k) ( c_l - sum_s R_ls x_s*(λ(k)) ) ]_+,   for each l,

where α(k) is the step size; certain choices guarantee λ(k) -> λ* and x*(λ(k)) -> x*.

Interpretation: balancing supply and demand through pricing. The link update is decentralized, each link using only local information. This gives a model for source-link dynamics.
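The source and link updates above can be combined into a short simulation. This is a sketch under assumptions of my choosing, not code from the lecture: log utilities U_s(x_s) = log x_s (so the demand function is x_s = 1/λ^s), a small made-up 2-link, 3-source network, and a constant step size.

```python
import numpy as np

R = np.array([[1.0, 1.0, 0.0],    # R[l, s] = 1 if source s uses link l
              [1.0, 0.0, 1.0]])   # source 1 uses both links
c = np.array([1.0, 1.0])          # link capacities
lam = np.ones(2)                  # link prices, maintained by the links
alpha = 0.05                      # constant step size (an assumption)

for _ in range(5000):
    path_price = R.T @ lam        # lam^s: end-to-end price seen by each source
    x = 1.0 / path_price          # source update: argmax_x log x - lam^s * x
    # link update: gradient projection, using only local load sum_s R_ls x_s
    lam = np.maximum(lam - alpha * (c - R @ x), 0.0)

# log utilities give the proportionally fair allocation; for this topology,
# working out the KKT conditions by hand gives x* = (1/3, 2/3, 2/3), lam* = (3/2, 3/2)
print(np.round(x, 3), np.round(lam, 3))
```

The long flow (source 1) pays the sum of both link prices and settles at a lower rate than the one-link flows, while each link's price stops moving exactly when its supply c_l matches its demand.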
DUALITY THEORY Jørgen Tind, Department of Statistics and Operations Research, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O, Denmark. Keywords: Duality, Saddle point, Complementary
More informationConcave programming. Concave programming is another special case of the general constrained optimization. subject to g(x) 0
1 Introduction Concave programming Concave programming is another special case of the general constrained optimization problem max f(x) subject to g(x) 0 in which the objective function f is concave and
More information4. Convex optimization problems
Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization
More informationMIT Algebraic techniques and semidefinite optimization February 14, Lecture 3
MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications
More informationOptimality Conditions
Chapter 2 Optimality Conditions 2.1 Global and Local Minima for Unconstrained Problems When a minimization problem does not have any constraints, the problem is to find the minimum of the objective function.
More informationDuality of LPs and Applications
Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will
More informationLecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima
B9824 Foundations of Optimization Lecture 1: Introduction Fall 2010 Copyright 2010 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained
More informationLine Search Methods for Unconstrained Optimisation
Line Search Methods for Unconstrained Optimisation Lecture 8, Numerical Linear Algebra and Optimisation Oxford University Computing Laboratory, MT 2007 Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The Generic
More informationReview of Optimization Basics
Review of Optimization Basics. Introduction Electricity markets throughout the US are said to have a twosettlement structure. The reason for this is that the structure includes two different markets:
More informationLINEAR CLASSIFIERS. J. Elder CSE 4404/5327 Introduction to Machine Learning and Pattern Recognition
LINEAR CLASSIFIERS Classification: Problem Statement 2 In regression, we are modeling the relationship between a continuous input variable x and a continuous target variable t. In classification, the input
More informationLecture: Convex Optimization Problems
1/36 Lecture: Convex Optimization Problems http://bicmr.pku.edu.cn/~wenzw/opt2015fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/36 optimization
More informationDeterminant maximization with linear. S. Boyd, L. Vandenberghe, S.P. Wu. Information Systems Laboratory. Stanford University
Determinant maximization with linear matrix inequality constraints S. Boyd, L. Vandenberghe, S.P. Wu Information Systems Laboratory Stanford University SCCM Seminar 5 February 1996 1 MAXDET problem denition
More information Wellcharacterized problems, minmax relations, approximate certificates.  LP problems in the standard form, primal and dual linear programs
LPDuality ( Approximation Algorithms by V. Vazirani, Chapter 12)  Wellcharacterized problems, minmax relations, approximate certificates  LP problems in the standard form, primal and dual linear programs
More informationECS289: Scalable Machine Learning
ECS289: Scalable Machine Learning ChoJui Hsieh UC Davis Sept 29, 2016 Outline Convex vs Nonconvex Functions Coordinate Descent Gradient Descent Newton s method Stochastic Gradient Descent Numerical Optimization
More informationSECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING
Nf SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING f(x R m g HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 5, DR RAPHAEL
More informationNewton s Method. Ryan Tibshirani Convex Optimization /36725
Newton s Method Ryan Tibshirani Convex Optimization 10725/36725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, Properties and examples: f (y) = max x
More informationGaps in Support Vector Optimization
Gaps in Support Vector Optimization Nikolas List 1 (student author), Don Hush 2, Clint Scovel 2, Ingo Steinwart 2 1 Lehrstuhl Mathematik und Informatik, RuhrUniversity Bochum, Germany nlist@lmi.rub.de
More informationNewton s Method. Javier Peña Convex Optimization /36725
Newton s Method Javier Peña Convex Optimization 10725/36725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, f ( (y) = max y T x f(x) ) x Properties and
More informationRelaxations and Randomized Methods for Nonconvex QCQPs
Relaxations and Randomized Methods for Nonconvex QCQPs Alexandre d Aspremont, Stephen Boyd EE392o, Stanford University Autumn, 2003 Introduction While some special classes of nonconvex problems can be
More informationMathematical Economics: Lecture 16
Mathematical Economics: Lecture 16 Yu Ren WISE, Xiamen University November 26, 2012 Outline 1 Chapter 21: Concave and Quasiconcave Functions New Section Chapter 21: Concave and Quasiconcave Functions Concave
More informationOn the convergence to saddle points of concaveconvex functions, the gradient method and emergence of oscillations
53rd IEEE Conference on Decision and Control December 1517, 214. Los Angeles, California, USA On the convergence to saddle points of concaveconvex functions, the gradient method and emergence of oscillations
More information4. Convex optimization problems (part 1: general)
EE/AA 578, Univ of Washington, Fall 2016 4. Convex optimization problems (part 1: general) optimization problem in standard form convex optimization problems quasiconvex optimization 4 1 Optimization problem
More informationMVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis
MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis AnnBrith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with
More informationEC400 Math for Microeconomics Syllabus The course is based on 6 sixty minutes lectures and on 6 ninety minutes classes.
London School of Economics Department of Economics Dr Francesco Nava Offi ce: 32L.3.20 EC400 Math for Microeconomics Syllabus 2016 The course is based on 6 sixty minutes lectures and on 6 ninety minutes
More information8. Geometric problems
8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 1 Minimum volume ellipsoid around a set LöwnerJohn ellipsoid
More informationCopositive Programming and Combinatorial Optimization
Copositive Programming and Combinatorial Optimization Franz Rendl http://www.math.uniklu.ac.at AlpenAdriaUniversität Klagenfurt Austria joint work with I.M. Bomze (Wien) and F. Jarre (Düsseldorf) IMA
More informationLagrangian Duality for Dummies
Lagrangian Duality for Dummies David Knowles November 13, 2010 We want to solve the following optimisation problem: f 0 () (1) such that f i () 0 i 1,..., m (2) For now we do not need to assume conveity.
More informationSupport Vector Machine (SVM) and Kernel Methods
Support Vector Machine (SVM) and Kernel Methods CE717: Machine Learning Sharif University of Technology Fall 2015 Soleymani Outline Margin concept HardMargin SVM SoftMargin SVM Dual Problems of HardMargin
More informationMS&E 318 (CME 338) LargeScale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) LargeScale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More information