Primal/Dual Decomposition Methods


1 Primal/Dual Decomposition Methods
Daniel P. Palomar
Hong Kong University of Science and Technology (HKUST)
ELEC Convex Optimization, Fall, HKUST, Hong Kong

2 Outline of Lecture
- Subgradients
- Subgradient methods
- Primal decomposition
- Dual decomposition
- Summary
(Acknowledgement to Stephen Boyd for material for this lecture.)

3 Gradient and First-Order Approximation
Recall the basic inequality for convex differentiable $f$:
  $f(y) \geq f(x) + \nabla f(x)^T (y - x)$ for all $y$.
The first-order approximation of $f$ at $x$ is a global underestimator, i.e., a supporting hyperplane.
What if $f$ is not differentiable? The answer is given by the concept of subgradient.

4 Subgradient of a Function
$g$ is a subgradient of $f$ (not necessarily convex) at $x$ if
  $f(y) \geq f(x) + g^T (y - x)$ for all $y$.

5 Subgradient of a Function
$g$ is a subgradient if and only if $f(x) + g^T (y - x)$ is a global affine underestimator of $f$.
If $f$ is convex and differentiable, then $\nabla f(x)$ is a subgradient of $f$ at $x$.
Subgradients come up in several contexts:
- algorithms for nondifferentiable convex optimization
- convex analysis, e.g., optimality conditions and duality for nondifferentiable problems.

6 Example of Subgradient
Consider $f = \max\{f_1, f_2\}$ with $f_1$ and $f_2$ convex and differentiable. Subgradient at a point $x$:
- If $f_1(x) > f_2(x)$, the subgradient is unique: $g = \nabla f_1(x)$.
- If $f_2(x) > f_1(x)$, the subgradient is unique: $g = \nabla f_2(x)$.
- If $f_1(x) = f_2(x)$, the subgradients form the segment $[\nabla f_1(x), \nabla f_2(x)]$ (all convex combinations of the two gradients).
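
The following Python sketch (not from the slides; the two functions are hypothetical examples) returns one valid subgradient of $\max\{f_1, f_2\}$ by picking the gradient of an active function:

```python
def subgrad_max(x):
    """Return one subgradient of f(x) = max(f1(x), f2(x)) at x,
    where f1(x) = x^2 and f2(x) = -x + 1 (both convex and differentiable)."""
    f1, g1 = x**2, 2.0 * x        # value and gradient of f1
    f2, g2 = -x + 1.0, -1.0       # value and gradient of f2
    # The gradient of any active (maximizing) function is a subgradient;
    # at a tie, any convex combination of g1 and g2 also works.
    return g1 if f1 >= f2 else g2

print(subgrad_max(2.0))   # f1 is active here -> 4.0
print(subgrad_max(0.0))   # f2 is active here -> -1.0
```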

7 Subdifferential
The set of all subgradients of $f$ at $x$ is called the subdifferential of $f$ at $x$ and is denoted $\partial f(x)$.
$\partial f(x)$ is a closed convex set (it can be empty). If $f$ is convex:
- $\partial f(x)$ is nonempty for $x \in \operatorname{relint} \operatorname{dom} f$
- if $f$ is differentiable at $x$, then $\partial f(x) = \{\nabla f(x)\}$ (i.e., a singleton)
- if $\partial f(x) = \{g\}$, then $f$ is differentiable at $x$ and $g = \nabla f(x)$.

8 Example of Subdifferential
Consider $f(x) = |x|$: the subdifferential is $\partial f(x) = \{-1\}$ for $x < 0$, $\{+1\}$ for $x > 0$, and the interval $[-1, 1]$ at $x = 0$. (The slide shows a plot of $f$ and of $\partial f$.)

9 Subgradient Calculus
- Weak subgradient calculus: formulas for finding one subgradient $g \in \partial f(x)$.
- Strong subgradient calculus: formulas for finding the whole subdifferential $\partial f(x)$, i.e., all subgradients of $f$ at $x$.
Many algorithms for nondifferentiable convex optimization require only one subgradient, so weak calculus suffices. Some algorithms and optimality conditions need the whole subdifferential.
In practice, if you can compute $f(x)$, you can usually compute a subgradient $g \in \partial f(x)$.

10 Some Basic Rules
(From now on, we will assume that $f$ is convex and $x \in \operatorname{relint} \operatorname{dom} f$.)
- $\partial f(x) = \{\nabla f(x)\}$ if $f$ is differentiable at $x$
- Scaling: $\partial(\alpha f) = \alpha \, \partial f$ (assuming $\alpha > 0$)
- Addition: $\partial(f_1 + f_2) = \partial f_1 + \partial f_2$ (addition of sets)
- Affine transformation of variables: if $g(x) = f(Ax + b)$, then $\partial g(x) = A^T \partial f(Ax + b)$
- Finite pointwise maximum: if $f = \max_i f_i$, then $\partial f(x) = \operatorname{Co} \bigcup \{\partial f_i(x) \mid f_i(x) = f(x)\}$, i.e., the convex hull of the union of the subdifferentials of the active functions at $x$.
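
As a hedged illustration of these rules (not on the slide), the sketch below combines the affine-transformation rule with the subdifferential of the absolute value to produce one subgradient of $f(x) = \|Ax - b\|_1$:

```python
import numpy as np

def subgrad_l1_residual(A, b, x):
    """One subgradient of f(x) = ||A x - b||_1.
    With r = A x - b, sign(r) is a subgradient of ||.||_1 at r (any value
    in [-1, 1] works for components with r_i = 0); the affine rule then
    gives A^T sign(A x - b) as a subgradient with respect to x."""
    r = A @ x - b
    return A.T @ np.sign(r)

# Small usage example with random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x = rng.standard_normal(3)
print(subgrad_l1_residual(A, b, x))
```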

11 Optimality Conditions: Unconstrained Case
Recall that for $f$ convex and differentiable, $x^\star$ minimizes $f(x)$ if and only if $0 = \nabla f(x^\star)$.
The generalization to nondifferentiable convex $f$ is $0 \in \partial f(x^\star)$.
Proof. By the definition of subgradient: $f(y) \geq f(x^\star) + 0^T (y - x^\star)$ for all $y$.

12 Example: Piecewise Linear Minimization
We want to minimize $f(x) = \max_i \{a_i^T x + b_i\}$.
$x^\star$ minimizes $f(x)$
  $\iff 0 \in \partial f(x^\star) = \operatorname{Co}\{a_i \mid a_i^T x^\star + b_i = f(x^\star)\}$
  $\iff$ there is a $\lambda$ with $\lambda \geq 0$, $\mathbf{1}^T \lambda = 1$, $\sum_i \lambda_i a_i = 0$, where $\lambda_i = 0$ if $a_i^T x^\star + b_i < f(x^\star)$.
Interestingly, these are exactly the KKT conditions for the problem in epigraph form:
  minimize (over $x, t$)  $t$
  subject to  $a_i^T x + b_i \leq t$,  $i = 1, \ldots, m$.
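
As an aside (not on the slide), the epigraph form is a linear program, so it can be handed to a generic LP solver; here is a sketch assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

def pwl_min_epigraph(A, b):
    """Solve minimize_x max_i (a_i^T x + b_i) via the epigraph LP
    minimize t  s.t.  a_i^T x + b_i <= t, with decision vector z = (x, t).
    A has the a_i^T as rows."""
    m, n = A.shape
    c = np.r_[np.zeros(n), 1.0]                     # objective: t
    A_ub = np.c_[A, -np.ones(m)]                    # a_i^T x - t <= -b_i
    b_ub = -b
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))  # x and t are free
    return res.x[:n], res.x[n]

rng = np.random.default_rng(2)
A, b = rng.standard_normal((30, 4)), rng.standard_normal(30)
x_star, f_star = pwl_min_epigraph(A, b)
print(f_star, np.max(A @ x_star + b))               # the two values should agree
```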

13 Optimality Conditions: Constrained Case
  minimize (over $x$)  $f_0(x)$
  subject to  $f_i(x) \leq 0$,  $i = 1, \ldots, m$,
where each $f_i$ is convex and defined on $\mathbb{R}^n$ (hence subdifferentiable), and strict feasibility holds (Slater's condition).
The KKT necessary and sufficient conditions are
  $f_i(x^\star) \leq 0$,  $\lambda_i^\star \geq 0$
  $0 \in \partial f_0(x^\star) + \sum_{i=1}^m \lambda_i^\star \partial f_i(x^\star)$
  $\lambda_i^\star f_i(x^\star) = 0$  (complementary slackness).

14 Numerical Methods for Nondifferentiable Problems
In $\mathbb{R}$, we can always use the bisection method for nondifferentiable functions to reduce the interval of uncertainty by half at each step.
Can we generalize this to $\mathbb{R}^n$? The difficulty is that $\mathbb{R}^n$, unlike $\mathbb{R}$, is not ordered. The answer is yes: these methods are called localization methods:
- cutting-plane methods (dating back to the 1960s in the Russian literature): the uncertainty set is a polyhedron
- ellipsoid method (going back to the 1970s in the Russian literature): the uncertainty set is an ellipsoid.
Another convenient method is the subgradient method.

15 Subgradient Method
The subgradient method is a simple algorithm (it looks similar to a gradient method) to minimize a nondifferentiable convex function $f$:
  $x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}$
where
- $x^{(k)}$ is the $k$th iterate
- $g^{(k)}$ is any subgradient of $f$ at $x^{(k)}$
- $\alpha_k > 0$ is the $k$th stepsize.
Note that it is not a descent method (unlike a gradient method), so we need to keep track of the best point so far:
  $f^{(k)}_{\mathrm{best}} = \min_{i=1,\ldots,k} f(x^{(i)})$.
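
A minimal Python sketch of this update (not from the slides); f and subgrad_f are user-supplied callables returning the function value and one subgradient, and the constant stepsize is just one of the rules discussed next:

```python
import numpy as np

def subgradient_method(f, subgrad_f, x0, alpha=1e-2, iters=1000):
    """Plain subgradient method with a constant stepsize.
    Keeps track of the best point seen, since the method is not a descent method."""
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for _ in range(iters):
        g = subgrad_f(x)           # any subgradient of f at x
        x = x - alpha * g          # subgradient update
        fx = f(x)
        if fx < f_best:            # record the best iterate so far
            f_best, x_best = fx, x.copy()
    return x_best, f_best
```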

16 Stepsize Rules
Different stepsize rules:
- constant stepsize: $\alpha_k = \alpha$
- constant step length: $\alpha_k = \gamma / \|g^{(k)}\|_2$ (so that $\|x^{(k+1)} - x^{(k)}\|_2 = \gamma$)
- square summable but not summable: stepsizes satisfying $\sum_{k=1}^{\infty} \alpha_k^2 < \infty$ and $\sum_{k=1}^{\infty} \alpha_k = \infty$
- nonsummable diminishing: stepsizes satisfying $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=1}^{\infty} \alpha_k = \infty$.
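
A small helper (assumed, not from the slides; the particular sequences $\alpha/k$ and $\alpha/\sqrt{k}$ are just common examples of the last two rules) that could feed a subgradient loop like the one above:

```python
import numpy as np

def stepsize(rule, k, g=None, alpha=1e-2, gamma=1e-1):
    """Return alpha_k for iteration k = 1, 2, ... under a given rule."""
    if rule == "constant":            # alpha_k = alpha
        return alpha
    if rule == "constant_length":     # alpha_k = gamma / ||g_k||_2
        return gamma / np.linalg.norm(g)
    if rule == "square_summable":     # e.g., alpha_k = alpha / k
        return alpha / k
    if rule == "diminishing":         # e.g., alpha_k = alpha / sqrt(k)
        return alpha / np.sqrt(k)
    raise ValueError(f"unknown rule: {rule}")
```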

17 Convergence Results
Under some technical conditions (boundedness of the optimal value, subgradients bounded in norm by $G$), the limiting value of the subgradient method $\bar{f} = \lim_{k \to \infty} f^{(k)}_{\mathrm{best}}$ satisfies:
- constant stepsize: $\bar{f} - f^\star \leq G^2 \alpha / 2$, i.e., it converges to a $G^2\alpha/2$-suboptimal value (it converges to $f^\star$ if $f$ is differentiable and $\alpha$ is small enough)
- constant step length: $\bar{f} - f^\star \leq G\gamma/2$, i.e., it converges to a $G\gamma/2$-suboptimal value
- diminishing stepsize rule: $\bar{f} = f^\star$, i.e., it converges.

18 Example: Piecewise Linear Minimization
Consider the following nondifferentiable optimization problem:
  minimize  $f(x) = \max_i \{a_i^T x + b_i\}$.
To find a subgradient, simply choose an index $j$ for which $a_j^T x + b_j = \max_i \{a_i^T x + b_i\}$ and take $g = a_j$. The subgradient update is
  $x^{(k+1)} = x^{(k)} - \alpha_k a_{j_k}$
where $j_k$ is an index achieving the maximum at $x^{(k)}$.
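
A runnable sketch of this update (not from the slides; the constant step length rule and the random problem data are assumptions for illustration):

```python
import numpy as np

def pwl_subgradient(A, b, x0, gamma=0.05, iters=500):
    """Subgradient method for minimizing f(x) = max_i (a_i^T x + b_i),
    with the constant step length rule alpha_k = gamma / ||g_k||_2.
    A has the a_i^T as rows."""
    x = np.asarray(x0, dtype=float)
    f_best, x_best = np.inf, x.copy()
    for _ in range(iters):
        vals = A @ x + b
        j = int(np.argmax(vals))       # an active index
        fx, g = vals[j], A[j]          # f(x) and the subgradient a_j
        if fx < f_best:
            f_best, x_best = fx, x.copy()
        x = x - (gamma / np.linalg.norm(g)) * g
    return x_best, f_best

# Random instance (much smaller than the n = 20, m = 100 instance on the next slide).
rng = np.random.default_rng(1)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
print(pwl_subgradient(A, b, np.zeros(5))[1])
```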

19 Example: Piecewise Linear Minimization (II)
Problem instance with $n = 20$ variables, $m = 100$ terms, $f^\star \approx 1.1$.
(Figure: constant step length, first 100 iterations, $f(x^{(k)}) - f^\star$ versus iteration $k$.)

20 Example: Piecewise Linear Minimization (III)
(Figure: constant step length, $f^{(k)}_{\mathrm{best}} - f^\star$ versus iteration $k$.)

21 Example: Piecewise Linear Minimization (IV)
(Figure: diminishing stepsize rule ($\alpha_k = 0.1/\sqrt{k}$) and square summable stepsize rule ($\alpha_k = 1/k$), $f^{(k)}_{\mathrm{best}} - f^\star$ versus iteration $k$.)

22 Projected Subgradient Method for Constrained Optimization
Consider the following constrained optimization problem:
  minimize (over $x$)  $f(x)$
  subject to  $x \in \mathcal{X}$.
The projected subgradient method guarantees that each update produces a feasible point via a projection:
  $x^{(k+1)} = \left[ x^{(k)} - \alpha_k g^{(k)} \right]_{\mathcal{X}}$
where $[\cdot]_{\mathcal{X}}$ denotes projection onto the convex set $\mathcal{X}$.
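
A minimal sketch assuming a box constraint set, for which the projection is a componentwise clip (a general convex set would need its own projection operator, possibly obtained by solving a small optimization problem):

```python
import numpy as np

def projected_subgradient(f, subgrad_f, x0, lo, hi, alpha=1e-2, iters=1000):
    """Projected subgradient method for: minimize f(x) s.t. lo <= x <= hi.
    The projection onto the box is np.clip."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    x_best, f_best = x.copy(), f(x)
    for _ in range(iters):
        x = np.clip(x - alpha * subgrad_f(x), lo, hi)   # step, then project
        fx = f(x)
        if fx < f_best:
            f_best, x_best = fx, x.copy()
    return x_best, f_best
```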

23 Decomposition Methods
The idea of a decomposition method is to solve a problem by solving smaller subproblems coordinated by a master problem.
There are different reasons to use a decomposition method:
- to allow the solution of a problem that is otherwise unsolvable for memory reasons (useful in areas such as biology or image processing)
- to speed up the solution of the problem via parallel computation
- to solve the problem in a distributed way (desirable for some wireless networks)
- to derive nice, insightful, and efficient numerical algorithms as an alternative to general-purpose interior-point methods.

24 Decomposition Methods (II)
In some (few) fortunate cases, problems decouple naturally:
  minimize (over $x$)  $f_1(x_1) + f_2(x_2)$
  subject to  $x_1 \in \mathcal{X}_1$, $x_2 \in \mathcal{X}_2$.
We can solve for $x_1$ and $x_2$ separately (in parallel). Even if they are solved sequentially, there are still advantages in memory and in speed (if the computational effort is superlinear in the problem size).

25 Decomposition Methods (III)
In general, however, problems do not decouple so easily, and that is when we can resort to decomposition methods. There are two main types of decomposition methods:
- primal decomposition: deals with complicating variables that couple the subproblems; the primal master problem directly controls the resources
- dual decomposition: deals with complicating constraints that couple the subproblems; the dual master problem controls the prices of the resources.

26 Primal Decomposition
Consider the following problem with a coupling variable:
  minimize (over $x, y$)  $f_1(x_1, y) + f_2(x_2, y)$
  subject to  $x_1 \in \mathcal{X}_1$, $x_2 \in \mathcal{X}_2$, $y \in \mathcal{Y}$.
$y$ is the complicating variable or coupling variable. When $y$ is fixed, the problem is separable in $x_1$ and $x_2$ and decouples into two subproblems that can be solved independently.
$x_1$ and $x_2$ are local or private variables; $y$ can be interpreted as a global or public variable that serves as an interface or boundary variable between the two subproblems.

27 Primal Decomposition (II)
For a given fixed $y$ we define the subproblems:
  subproblem 1:  minimize (over $x_1 \in \mathcal{X}_1$)  $f_1(x_1, y)$
  subproblem 2:  minimize (over $x_2 \in \mathcal{X}_2$)  $f_2(x_2, y)$
with optimal values $f_1^\star(y)$ and $f_2^\star(y)$.
Since $\min_{x,y} f = \min_y \min_x f$, the original problem is equivalent to the primal master problem:
  minimize (over $y \in \mathcal{Y}$)  $f_1^\star(y) + f_2^\star(y)$.

28 Primal Decomposition (III)
Observations:
- the subproblems can be solved independently for a given $y$
- we don't have a closed-form expression for each function $f_i^\star(y)$ or for its corresponding gradient or subgradient (is it even differentiable?)
- instead, to evaluate each function $f_i^\star(y)$ at some point $y$ we need to solve an optimization problem
- interestingly, we can easily obtain a subgradient of $f_i^\star(y)$ for free when we evaluate the function.

29 Primal Decomposition: Solving the Master Problem
If the original problem is convex, so is the master problem.
To solve the master problem, we can use different methods such as
- bisection (if $y$ is scalar)
- gradient or Newton method (if the $f_i^\star$ are differentiable)
- subgradient, cutting-plane, or ellipsoid method.
The subgradient method is very simple and lends itself to distributed implementation; however, its convergence is slow in practice.
Projected subgradient method:
  $y(k+1) = \left[ y(k) - \alpha_k \left( s_1(k) + s_2(k) \right) \right]_{\mathcal{Y}}$
where $s_i(k)$ is a subgradient of $f_i^\star$ at $y(k)$.

30 Primal Decomposition Algorithm
repeat
1. Solve the subproblems:
   Find $x_1(k) \in \mathcal{X}_1$ that minimizes $f_1(x_1, y(k))$, and a subgradient $s_1(k) \in \partial f_1^\star(y(k))$.
   Find $x_2(k) \in \mathcal{X}_2$ that minimizes $f_2(x_2, y(k))$, and a subgradient $s_2(k) \in \partial f_2^\star(y(k))$.
2. Update the complicating variable to minimize the primal master problem:
   $y(k+1) = \left[ y(k) - \alpha_k \left( s_1(k) + s_2(k) \right) \right]_{\mathcal{Y}}$.
3. $k = k + 1$
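
The following Python sketch mirrors this loop on a toy problem (not from the slides; the quadratic subobjectives, the interval $\mathcal{Y} = [0, 1]$, and the closed-form subproblem solutions are assumptions chosen so that each subproblem solver can also report a subgradient of its optimal-value function):

```python
def primal_decomposition(solve_sub1, solve_sub2, y0, project_Y, alpha=0.05, iters=200):
    """Primal decomposition with a projected subgradient master update.
    Each solve_sub_i(y) returns (x_i*, f_i*(y), s_i), where s_i is a
    subgradient of the optimal-value function f_i* at y."""
    y = float(y0)
    for _ in range(iters):
        _, f1, s1 = solve_sub1(y)
        _, f2, s2 = solve_sub2(y)
        y = project_Y(y - alpha * (s1 + s2))   # master update on the coupling variable
    return y

# Toy subproblem: minimize over x of f(x, y) = (x - y)^2 + c * (x - d)^2.
def make_sub(c, d):
    def solve(y):
        x = (y + c * d) / (1.0 + c)            # closed-form minimizer
        fval = (x - y) ** 2 + c * (x - d) ** 2
        s = 2.0 * (y - x)                      # d f*/dy, evaluated at the minimizer
        return x, fval, s
    return solve

y_opt = primal_decomposition(make_sub(1.0, 0.0), make_sub(2.0, 3.0), y0=0.5,
                             project_Y=lambda y: min(max(y, 0.0), 1.0))
print(y_opt)   # the coupled resource settles at the boundary y = 1 in this instance
```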

31 Primal Decomposition: Subgradients
Lemma: Let $f^\star(y)$ be the optimal value of the convex problem
  minimize (over $x$)  $f_0(x)$
  subject to  $h_i(x) \leq y_i$,  $i = 1, \ldots, m$.
A subgradient of $f^\star(y)$ is $-\lambda^\star(y)$.
Proof. The Lagrangian is $L(x, \lambda) = f_0(x) + \lambda^T (h(x) - y)$. Then,
  $f^\star(y_0) = f_0(x^\star(y_0)) = g(\lambda^\star(y_0)) \leq L(x, \lambda^\star(y_0)) = f_0(x) + \lambda^\star(y_0)^T (h(x) - y_0)$
  $= f_0(x) + \lambda^\star(y_0)^T (h(x) - y) + \lambda^\star(y_0)^T (y - y_0)$
  $\leq f_0(x) + \lambda^\star(y_0)^T (y - y_0)$

32 where the last inequality holds for any $x$ such that $h(x) \leq y$. In particular,
  $f^\star(y_0) \leq \min_{h(x) \leq y} f_0(x) + \lambda^\star(y_0)^T (y - y_0) = f^\star(y) + \lambda^\star(y_0)^T (y - y_0)$
or
  $f^\star(y) \geq f^\star(y_0) - \lambda^\star(y_0)^T (y - y_0)$.
Lemma: Let $f^\star(y)$ be the optimal value of the convex problem
  minimize (over $x$)  $f_0(x, y)$
  subject to  $h_i(x) \leq y_i$,  $i = 1, \ldots, m$.
A subgradient of $f^\star(y)$ is $s_0(x^\star(y), y) - \lambda^\star(y)$, where $s_0(x^\star(y), y)$ denotes a subgradient of $f_0(x^\star(y), \cdot)$ at $y$.

33 Dual Decomposition
Consider the following problem with a coupling constraint:
  minimize (over $x$)  $f_1(x_1) + f_2(x_2)$
  subject to  $x_1 \in \mathcal{X}_1$, $x_2 \in \mathcal{X}_2$,
              $h_1(x_1) + h_2(x_2) \leq h_0$.
$h_1(x_1) + h_2(x_2) \leq h_0$ is the complicating or coupling constraint. If the coupling constraint is relaxed with a Lagrange multiplier $\lambda$, the problem decouples into two subproblems that can be solved independently.
$x_1$ and $x_2$ are local or private variables; $\lambda$ can be interpreted as a global or public price variable that serves as an interface or boundary variable between the two subproblems.

34 Dual Decomposition (II)
The partial Lagrangian after relaxing the coupling constraint is
  $L(x, \lambda) = f_1(x_1) + f_2(x_2) + \lambda^T \left( h_1(x_1) + h_2(x_2) - h_0 \right)$.
The dual function is
  $g(\lambda) = \inf_{x \in \mathcal{X}} L(x, \lambda) = \inf_{x_1 \in \mathcal{X}_1} \left\{ f_1(x_1) + \lambda^T h_1(x_1) \right\} + \inf_{x_2 \in \mathcal{X}_2} \left\{ f_2(x_2) + \lambda^T h_2(x_2) \right\} - \lambda^T h_0$,
which clearly decouples.

35 Dual Decomposition (III)
For a given fixed $\lambda$ we define the subproblems:
  subproblem 1:  minimize (over $x_1 \in \mathcal{X}_1$)  $f_1(x_1) + \lambda^T h_1(x_1)$
  subproblem 2:  minimize (over $x_2 \in \mathcal{X}_2$)  $f_2(x_2) + \lambda^T h_2(x_2)$
with optimal values $g_1(\lambda)$ and $g_2(\lambda)$.
From strong duality, the original problem is equivalent to the dual master problem:
  maximize (over $\lambda \geq 0$)  $g_1(\lambda) + g_2(\lambda) - \lambda^T h_0$.

36 Dual Decomposition (IV)
Observations:
- the subproblems can be solved independently for a given $\lambda$
- we don't have a closed-form expression for each function $g_i(\lambda)$ or for its corresponding gradient or subgradient (is it even differentiable?)
- instead, to evaluate each function $g_i(\lambda)$ at some point $\lambda$ we need to solve an optimization problem
- as in the primal case, we can easily obtain a subgradient of $g_i(\lambda)$ for free when we evaluate the function.

37 Dual Decomposition: Solving the Master Problem
The dual master problem is always convex regardless of the original problem. However, we still need convexity of the original problem to have strong duality (under some constraint qualification such as Slater's condition).
To solve the master problem, we can use different methods such as
- bisection (if $\lambda$ is scalar)
- gradient or Newton method (if the $g_i$ are differentiable)
- subgradient, cutting-plane, or ellipsoid method.
Projected subgradient method:
  $\lambda(k+1) = \left[ \lambda(k) + \alpha_k \left( s_1(k) + s_2(k) - h_0 \right) \right]^+$
where $s_i(k)$ is a subgradient of $g_i$ at $\lambda(k)$.

38 Dual Decomposition Algorithm
repeat
1. Solve the subproblems:
   Find $x_1(k) \in \mathcal{X}_1$ that minimizes $f_1(x_1) + \lambda(k)^T h_1(x_1)$, and a subgradient $s_1(k) \in \partial g_1(\lambda(k))$.
   Find $x_2(k) \in \mathcal{X}_2$ that minimizes $f_2(x_2) + \lambda(k)^T h_2(x_2)$, and a subgradient $s_2(k) \in \partial g_2(\lambda(k))$.
2. Update the price variable to maximize the dual master problem:
   $\lambda(k+1) = \left[ \lambda(k) + \alpha_k \left( s_1(k) + s_2(k) - h_0 \right) \right]^+$.
3. $k = k + 1$
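
A Python sketch of this loop on a toy problem (not from the slides; the quadratic objectives, the scalar coupling constraint $x_1 + x_2 \leq c$, and the closed-form subproblem solutions are assumptions for illustration):

```python
def dual_decomposition(solve_sub1, solve_sub2, h0, lam0=0.0, alpha=0.1, iters=300):
    """Dual decomposition with a projected subgradient (price) update.
    Each solve_sub_i(lam) returns (x_i*, h_i(x_i*)); h_i(x_i*) is a
    subgradient of g_i at lam."""
    lam = float(lam0)
    for _ in range(iters):
        _, h1 = solve_sub1(lam)
        _, h2 = solve_sub2(lam)
        lam = max(0.0, lam + alpha * (h1 + h2 - h0))   # price update, projected onto lam >= 0
    return lam

# Toy example: minimize (x1 - a1)^2 + (x2 - a2)^2  s.t.  x1 + x2 <= c.
def make_sub(a):
    def solve(lam):
        x = a - lam / 2.0        # argmin over x of (x - a)^2 + lam * x
        return x, x              # here h_i(x_i) = x_i
    return solve

lam_opt = dual_decomposition(make_sub(3.0), make_sub(2.0), h0=1.0)
print(lam_opt)   # converges to the optimal price lambda* = 4 for this instance
```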

39 Dual Decomposition: Subgradients
Lemma: Let $g(\lambda)$ be the dual function corresponding to the problem
  minimize (over $x$)  $f_0(x)$
  subject to  $h_i(x) \leq 0$,  $i = 1, \ldots, m$.
A subgradient of $g(\lambda)$ (in the concave sense, since $g$ is concave) is $h(x^\star(\lambda))$.
Proof. The Lagrangian is $L(x, \lambda) = f_0(x) + \lambda^T h(x)$ and the dual function is $g(\lambda) = \inf_x L(x, \lambda)$. Then,
  $g(\lambda) = \inf_x \left\{ f_0(x) + \lambda^T h(x) \right\} \leq f_0(x^\star(\lambda_0)) + \lambda^T h(x^\star(\lambda_0))$
  $= f_0(x^\star(\lambda_0)) + \lambda_0^T h(x^\star(\lambda_0)) + (\lambda - \lambda_0)^T h(x^\star(\lambda_0))$
  $= g(\lambda_0) + (\lambda - \lambda_0)^T h(x^\star(\lambda_0))$.

40 Summary
- We have described the concept of subgradient as a generalization of the gradient.
- We have considered subgradient methods, which are formally similar to gradient methods.
- Finally, we have derived the two basic decomposition techniques: primal decomposition and dual decomposition.
