On the Method of Lagrange Multipliers
Reza Nasiri Mahalati
November 6, 2016

Most of what is in this note is taken from the Convex Optimization book by Stephen Boyd and Lieven Vandenberghe. It will hopefully demystify the method of Lagrange multipliers to some extent, and help you understand why and when the method works.

The Lagrange dual function

Generally, an optimization problem in standard form is given by

    minimize    f_0(x)
    subject to  f_i(x) \le 0,   i = 1, \dots, m                              (1)
                h_i(x) = 0,     i = 1, \dots, p,

with variable x \in R^n. We assume its domain D = \bigcap_{i=0}^{m} dom f_i \cap \bigcap_{i=1}^{p} dom h_i is nonempty, and we denote the optimal value of (1) by p^*. (We use dom to denote the domain of a function.) As an example, the general norm minimization with equality constraints that we discussed in class is the special case of (1) where

    f_0(x) = (1/2) \|Ax - b\|^2
    h_i(x) = c_i^T x - d_i,   i = 1, \dots, p,                               (2)

and there are no inequality constraints (i.e., there are no f_i(x), i = 1, \dots, m). We simply write the p equality constraints in matrix form as Cx - d = 0.

The basic idea in Lagrangian duality is to take the constraints in (1) into account by augmenting the objective function with a weighted sum of the constraint functions. We define the Lagrangian L : R^n \times R^m \times R^p \to R associated with the problem (1) as

    L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{i=1}^{p} \nu_i h_i(x),

with dom L = D \times R^m \times R^p. We refer to \lambda_i as the Lagrange multiplier associated with the ith inequality constraint f_i(x) \le 0; similarly, we refer to \nu_i as the Lagrange multiplier associated with the ith equality constraint h_i(x) = 0. The vectors \lambda and \nu are called the dual variables or Lagrange multiplier vectors associated with the problem (1).
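To make the Lagrangian concrete, here is a minimal sketch in Python/NumPy that evaluates L(x, \nu) for the norm minimization problem (2); the problem data A, b, C, d are hypothetical random placeholders, not values from the note.

```python
import numpy as np

# Hypothetical data for problem (2): minimize (1/2)||Ax - b||^2  s.t.  Cx = d.
rng = np.random.default_rng(0)
n, k, p = 5, 8, 2                      # variable dim, rows of A, # constraints
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)
C = rng.standard_normal((p, n))
d = rng.standard_normal(p)

def lagrangian(x, nu):
    """L(x, nu) = f_0(x) + sum_i nu_i h_i(x); no inequality terms in (2)."""
    f0 = 0.5 * np.linalg.norm(A @ x - b) ** 2
    h = C @ x - d                      # vector of equality residuals h_i(x)
    return f0 + nu @ h

x = rng.standard_normal(n)
nu = rng.standard_normal(p)
print(lagrangian(x, nu))
```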
We define the Lagrange dual function (or just dual function) g : R^m \times R^p \to R as the minimum value of the Lagrangian over x: for \lambda \in R^m, \nu \in R^p,

    g(\lambda, \nu) = \min_{x \in D} L(x, \lambda, \nu)
                    = \min_{x \in D} \Big( f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{i=1}^{p} \nu_i h_i(x) \Big).

When the Lagrangian is unbounded below in x, the dual function takes on the value -\infty. Since the dual function is the pointwise minimum of a family of affine functions of (\lambda, \nu), it is always concave, and hence we can always find its maximum.

Lower bounds on optimal value

The dual function yields lower bounds on the optimal value p^* of the problem (1): for any \lambda \succeq 0 and any \nu we have

    g(\lambda, \nu) \le p^*.                                                 (3)

This important property is easily verified. Suppose \tilde{x} is a feasible point for the problem (1), i.e., f_i(\tilde{x}) \le 0 and h_i(\tilde{x}) = 0, and \lambda \succeq 0. Then we have

    \sum_{i=1}^{m} \lambda_i f_i(\tilde{x}) + \sum_{i=1}^{p} \nu_i h_i(\tilde{x}) \le 0,

since each term in the first sum is nonpositive and each term in the second sum is zero, and therefore

    L(\tilde{x}, \lambda, \nu) = f_0(\tilde{x}) + \sum_{i=1}^{m} \lambda_i f_i(\tilde{x}) + \sum_{i=1}^{p} \nu_i h_i(\tilde{x}) \le f_0(\tilde{x}).

Hence

    g(\lambda, \nu) = \min_{x \in D} L(x, \lambda, \nu) \le L(\tilde{x}, \lambda, \nu) \le f_0(\tilde{x}).

Since g(\lambda, \nu) \le f_0(\tilde{x}) holds for every feasible point \tilde{x}, the inequality (3) follows. The lower bound (3) is illustrated in Figure 1, for a simple problem with x \in R and one inequality constraint.

The inequality (3) holds, but is vacuous, when g(\lambda, \nu) = -\infty. The dual function gives a nontrivial lower bound on p^* only when \lambda \succeq 0 and (\lambda, \nu) \in dom g, i.e., g(\lambda, \nu) > -\infty. We refer to a pair (\lambda, \nu) with \lambda \succeq 0 and (\lambda, \nu) \in dom g as dual feasible.
[Figure 1 (plot omitted in transcription): Lower bound from a dual feasible point. The solid curve shows the objective function f_0, and the dashed curve shows the constraint function f_1. The feasible set is the interval [-0.46, 0.46], indicated by the two dotted vertical lines. The optimal point and value are x^* = -0.46, p^* = 1.54 (shown as a circle). The dotted curves show L(x, \lambda) for \lambda = 0.1, 0.2, \dots, 1.0. Each of these has a minimum value smaller than p^*, since on the feasible set (and for \lambda \ge 0) we have L(x, \lambda) \le f_0(x).]
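As a sanity check on the bound (3), consider a hypothetical toy instance (not from the note): minimize x^2 subject to 1 - x \le 0, so x^* = 1 and p^* = 1. Minimizing L(x, \lambda) = x^2 + \lambda(1 - x) over x gives g(\lambda) = \lambda - \lambda^2/4 in closed form, and the sketch below confirms the lower bound for a range of \lambda \succeq 0.

```python
import numpy as np

# Toy instance: minimize x^2 s.t. 1 - x <= 0, so p* = 1 at x* = 1.
# Setting dL/dx = 2x - lam = 0 gives x = lam/2, hence g(lam) = lam - lam^2/4.
p_star = 1.0

def g(lam):
    return lam - lam**2 / 4.0

for lam in np.linspace(0.0, 4.0, 9):
    assert g(lam) <= p_star + 1e-12            # lower-bound property (3)
    print(f"lam = {lam:4.1f}   g(lam) = {g(lam):6.3f} <= p* = {p_star}")
```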
Linear approximation interpretation

The Lagrangian and the lower bound property can be given a simple interpretation, based on a linear approximation of the indicator functions of the sets \{0\} and -R_+. We first rewrite the original problem (1) as an unconstrained problem,

    minimize   f_0(x) + \sum_{i=1}^{m} I_-(f_i(x)) + \sum_{i=1}^{p} I_0(h_i(x)),   (4)

where I_- : R \to R is the indicator function for the nonpositive reals,

    I_-(u) = 0 for u \le 0,   and   I_-(u) = \infty for u > 0,

and similarly, I_0 is the indicator function of \{0\}. In the formulation (4), the function I_-(u) can be interpreted as expressing our irritation or displeasure associated with a constraint function value u = f_i(x): it is zero if f_i(x) \le 0, and infinite if f_i(x) > 0. In a similar way, I_0(u) gives our displeasure for an equality constraint value u = h_i(x). We can think of I_- as a "brick wall" or infinitely hard displeasure function; our displeasure jumps from zero to infinite as f_i(x) transitions from nonpositive to positive.

Now suppose that in the formulation (4) we replace the function I_-(u) with the linear function \lambda_i u, where \lambda_i \ge 0, and the function I_0(u) with \nu_i u. The objective becomes the Lagrangian function L(x, \lambda, \nu), and the dual function value g(\lambda, \nu) is the optimal value of the problem

    minimize   L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{i=1}^{p} \nu_i h_i(x).   (5)

In this formulation, we use a linear or "soft" displeasure function in place of I_- and I_0. For an inequality constraint, our displeasure is zero when f_i(x) = 0 and positive when f_i(x) > 0 (assuming \lambda_i > 0); our displeasure grows as the constraint becomes more violated. Unlike the original formulation, in which any nonpositive value of f_i(x) is acceptable, in the soft formulation we actually derive pleasure from constraints that have margin, i.e., from f_i(x) < 0.

Clearly the approximation of the indicator function I_-(u) by the linear function \lambda_i u is rather poor. But the linear function is at least an underestimator of the indicator function: since \lambda_i u \le I_-(u) and \nu_i u \le I_0(u) for all u, we see immediately that the dual function yields a lower bound on the optimal value of the original problem.

The Lagrange dual problem

For each pair (\lambda, \nu) with \lambda \succeq 0, the Lagrange dual function gives us a lower bound on the optimal value p^* of the optimization problem (1). Thus we have a lower bound that depends on the parameters \lambda, \nu. A natural question is: what is the best lower bound that can be obtained from the Lagrange dual function? This leads to the optimization problem

    maximize    g(\lambda, \nu)
    subject to  \lambda \succeq 0.                                           (6)

This problem is called the Lagrange dual problem associated with the problem (1). In this context the original problem (1) is sometimes called the primal problem. The term dual feasible, describing a pair (\lambda, \nu) with \lambda \succeq 0 and g(\lambda, \nu) > -\infty, now makes sense: it means, as the name implies, that (\lambda, \nu) is feasible for the dual problem (6). We refer to (\lambda^*, \nu^*) as dual optimal or optimal Lagrange multipliers if they are optimal for the problem (6).

The Lagrange dual problem (6) is always a convex optimization problem, since the objective to be maximized is concave and the constraint is convex. Therefore, we can always solve this problem.
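For the toy instance from the earlier sketch, the dual problem (6) reads: maximize \lambda - \lambda^2/4 subject to \lambda \ge 0. A quick grid search over the dual-feasible set recovers the best bound (a sketch, using the same hypothetical toy problem):

```python
import numpy as np

# Dual problem (6) for the toy example: maximize g(lam) = lam - lam^2/4
# over lam >= 0. Calculus gives lam* = 2 and g(lam*) = 1; the grid search
# over the dual-feasible set confirms it.
lam = np.linspace(0.0, 10.0, 100001)
g = lam - lam**2 / 4.0
i = g.argmax()
print(f"lam* ~ {lam[i]:.4f}, d* ~ {g[i]:.6f}")   # lam* = 2, d* = 1
```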
Weak duality

The optimal value of the Lagrange dual problem, which we denote d^*, is, by definition, the best lower bound on p^* that can be obtained from the Lagrange dual function. In particular, we have the simple but important inequality

    d^* \le p^*,                                                             (7)

which holds for any optimization problem. This property is called weak duality.

The weak duality inequality (7) holds even when d^* and p^* are infinite. For example, if the primal problem is unbounded below, so that p^* = -\infty, we must have d^* = -\infty, i.e., the Lagrange dual problem is infeasible. Conversely, if the dual problem is unbounded above, so that d^* = \infty, we must have p^* = \infty, i.e., the primal problem is infeasible.

We refer to the difference p^* - d^* as the optimal duality gap of the original problem, since it gives the gap between the optimal value of the primal problem and the best (i.e., greatest) lower bound on it that can be obtained from the Lagrange dual function. The optimal duality gap is always nonnegative. The bound (7) can sometimes be used to find a lower bound on the optimal value of a problem that is difficult to solve, since the dual problem is always convex and can be solved efficiently to find d^*.

Strong duality

If the equality

    d^* = p^*                                                                (8)

holds, i.e., the optimal duality gap is zero, then we say that strong duality holds. This means that the best bound that can be obtained from the Lagrange dual function is tight.

Strong duality does not hold in general. But if the primal problem (1) is convex, i.e., of the form

    minimize    f_0(x)
    subject to  f_i(x) \le 0,   i = 1, \dots, m                              (9)
                Cx = d,

with f_0, \dots, f_m convex functions, we usually (but not always) have strong duality. There are many results that establish conditions on the problem, beyond convexity, under which strong duality holds; these conditions are called constraint qualifications. The optimization problems we will be solving in EE263 are always convex, and since we don't work with inequality constraints in this course, we need not worry about constraint qualifications. In other words, strong duality always holds in EE263, except when the constraint Cx = d cannot be satisfied for any x \in D, in which case the problem is infeasible and cannot be solved.
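A sketch of strong duality in action for the equality-constrained problem (2), with hypothetical random data and the (assumed) invertibility of A^T A: we compute p^* and d^* by separate routes and observe a zero gap.

```python
import numpy as np

# Hypothetical data (A^T A assumed invertible): verify strong duality for
# problem (2) by computing p* and d* separately.
rng = np.random.default_rng(2)
n, k, p = 4, 6, 2
A = rng.standard_normal((k, n)); b = rng.standard_normal(k)
C = rng.standard_normal((p, n)); d = rng.standard_normal(p)
H = A.T @ A                                    # Hessian of f_0

# Dual function: x(nu) = H^{-1}(A^T b - C^T nu) minimizes L(x, nu), so
# g(nu) = (1/2)||A x(nu) - b||^2 + nu^T (C x(nu) - d).
def g(nu):
    x = np.linalg.solve(H, A.T @ b - C.T @ nu)
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + nu @ (C @ x - d)

# Maximizing g: stationarity gives C x(nu*) = d, a p x p linear system in nu.
M = C @ np.linalg.solve(H, C.T)
nu_star = np.linalg.solve(M, C @ np.linalg.solve(H, A.T @ b) - d)
x_star = np.linalg.solve(H, A.T @ b - C.T @ nu_star)   # primal optimal

p_star = 0.5 * np.linalg.norm(A @ x_star - b) ** 2
print("p* =", p_star, " d* =", g(nu_star))             # equal: zero gap
```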
Now suppose that the primal and dual optimal values are attained and equal (so, in particular, strong duality holds). Let x^* be a primal optimal point and (\lambda^*, \nu^*) a dual optimal point. This means that

    f_0(x^*) = g(\lambda^*, \nu^*)
             = \inf_x \Big( f_0(x) + \sum_{i=1}^{m} \lambda_i^* f_i(x) + \sum_{i=1}^{p} \nu_i^* h_i(x) \Big)
             \le f_0(x^*) + \sum_{i=1}^{m} \lambda_i^* f_i(x^*) + \sum_{i=1}^{p} \nu_i^* h_i(x^*)
             \le f_0(x^*).

The first line states that the optimal duality gap is zero, and the second line is the definition of the dual function. The third line follows since the infimum of the Lagrangian over x is less than or equal to its value at x = x^*. The last inequality follows from \lambda_i^* \ge 0 and f_i(x^*) \le 0 for i = 1, \dots, m, and h_i(x^*) = 0 for i = 1, \dots, p. We conclude that the two inequalities in this chain hold with equality.

We can draw several interesting conclusions from this. For example, since the inequality in the third line is an equality, we conclude that x^* minimizes L(x, \lambda^*, \nu^*) over x. (The Lagrangian L(x, \lambda^*, \nu^*) can have other minimizers; x^* is simply a minimizer.)

Another important conclusion is that \sum_{i=1}^{m} \lambda_i^* f_i(x^*) = 0. Since each term in this sum is nonpositive, we conclude that

    \lambda_i^* f_i(x^*) = 0,   i = 1, \dots, m.                             (10)

This condition is known as complementary slackness; it holds for any primal optimal x^* and any dual optimal (\lambda^*, \nu^*) (when strong duality holds). We can express the complementary slackness condition as

    \lambda_i^* > 0 \implies f_i(x^*) = 0,

or, equivalently,

    f_i(x^*) < 0 \implies \lambda_i^* = 0.

Roughly speaking, this means the ith optimal Lagrange multiplier is zero unless the ith constraint is active at the optimum.
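A minimal numeric illustration of complementary slackness (10), using a hypothetical two-constraint toy problem (not from the note): minimize (x - 2)^2 subject to x - 1 \le 0 and -1 - x \le 0. Working the KKT conditions by hand gives x^* = 1 with \lambda_1^* = 2 (active constraint) and \lambda_2^* = 0 (inactive constraint).

```python
# At the optimum, each product lam_i* f_i(x*) vanishes: the first because
# f_1(x*) = 0 (active), the second because lam_2* = 0 (inactive).
x_star = 1.0
lam_star = [2.0, 0.0]
f_star = [x_star - 1.0, -1.0 - x_star]         # f_1(x*) = 0, f_2(x*) = -2

for i, (li, fi) in enumerate(zip(lam_star, f_star), start=1):
    assert abs(li * fi) < 1e-12                # complementary slackness (10)
    print(f"lam_{i}* = {li}, f_{i}(x*) = {fi}, product = {li * fi}")
```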
KKT conditions

We now assume that the functions f_0, \dots, f_m, h_1, \dots, h_p are differentiable (and therefore have open domains), but we make no assumptions yet about convexity. As above, let x^* and (\lambda^*, \nu^*) be any primal and dual optimal points with zero duality gap. Since x^* minimizes L(x, \lambda^*, \nu^*) over x, its gradient must vanish at x^*, i.e.,

    \nabla f_0(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla f_i(x^*) + \sum_{i=1}^{p} \nu_i^* \nabla h_i(x^*) = 0.

Thus we have

    f_i(x^*) \le 0,      i = 1, \dots, m
    h_i(x^*) = 0,        i = 1, \dots, p
    \lambda_i^* \ge 0,   i = 1, \dots, m                                     (11)
    \lambda_i^* f_i(x^*) = 0,   i = 1, \dots, m
    \nabla f_0(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla f_i(x^*) + \sum_{i=1}^{p} \nu_i^* \nabla h_i(x^*) = 0,

which are called the Karush-Kuhn-Tucker (KKT) conditions.

To summarize: for any optimization problem with differentiable objective and constraint functions for which strong duality obtains, any pair of primal and dual optimal points must satisfy the KKT conditions (11).

When the primal problem is convex, the KKT conditions are also sufficient for the points to be primal and dual optimal. In other words, if the f_i are convex and the h_i are affine, and \tilde{x}, \tilde{\lambda}, \tilde{\nu} are any points that satisfy the KKT conditions

    f_i(\tilde{x}) \le 0,        i = 1, \dots, m
    h_i(\tilde{x}) = 0,          i = 1, \dots, p
    \tilde{\lambda}_i \ge 0,     i = 1, \dots, m
    \tilde{\lambda}_i f_i(\tilde{x}) = 0,   i = 1, \dots, m
    \nabla f_0(\tilde{x}) + \sum_{i=1}^{m} \tilde{\lambda}_i \nabla f_i(\tilde{x}) + \sum_{i=1}^{p} \tilde{\nu}_i \nabla h_i(\tilde{x}) = 0,

then \tilde{x} and (\tilde{\lambda}, \tilde{\nu}) are primal and dual optimal, with zero duality gap.

What we did with the method of Lagrange multipliers in class was precisely to form and solve the KKT system. To see this, note that the KKT conditions for our general norm minimization problem (2) are

    C\tilde{x} - d = \nabla_{\nu} L(\tilde{x}, \tilde{\nu}) = 0,
    \nabla f_0(\tilde{x}) + \sum_{i=1}^{p} \tilde{\nu}_i \nabla h_i(\tilde{x}) = \nabla_x L(\tilde{x}, \tilde{\nu}) = 0,

which is exactly the system of equations we got from the method of Lagrange multipliers.
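To connect this back to class, here is a minimal sketch (hypothetical random data) that forms and solves the KKT system for problem (2): stationarity \nabla_x L = A^T(Ax - b) + C^T\nu = 0 and primal feasibility Cx = d stack into one symmetric linear system in (x, \nu).

```python
import numpy as np

# Hypothetical data for problem (2): minimize (1/2)||Ax - b||^2  s.t.  Cx = d.
rng = np.random.default_rng(1)
n, k, p = 5, 8, 2
A = rng.standard_normal((k, n)); b = rng.standard_normal(k)
C = rng.standard_normal((p, n)); d = rng.standard_normal(p)

# KKT system:  [ A^T A  C^T ] [ x  ]   [ A^T b ]
#              [ C      0   ] [ nu ] = [ d     ]
KKT = np.block([[A.T @ A, C.T],
                [C, np.zeros((p, p))]])
rhs = np.concatenate([A.T @ b, d])
sol = np.linalg.solve(KKT, rhs)
x_opt, nu_opt = sol[:n], sol[n:]

print("feasibility  ||Cx - d||   =", np.linalg.norm(C @ x_opt - d))
print("stationarity ||grad_x L|| =",
      np.linalg.norm(A.T @ (A @ x_opt - b) + C.T @ nu_opt))
```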