Lecture 14 Barrier method

L. Vandenberghe, EE236A (Fall)

Lecture 14: Barrier method

- centering problem
- Newton decrement
- local convergence of Newton method
- short-step barrier method
- global convergence of Newton method
- predictor-corrector method

14-1

Centering problem

centering problem (with the notation and assumptions of page 13-12):

    minimize   f_t(x) = t c^T x + \phi(x)

- $\phi(x) = -\sum_{i=1}^m \log(b_i - a_i^T x)$ is the logarithmic barrier of $Ax \le b$
- the minimizer is the point $x^\star(t)$ on the central path
- the minimizer is an $m/t$-suboptimal solution of the LP:  $c^T x^\star(t) - p^\star \le m/t$
- barrier method(s): use Newton's method to (approximately) minimize $f_t$, for a sequence of values of $t$

Barrier method 14-2

Properties of centering cost function

gradient and Hessian

    \nabla f_t(x) = t c + A^T d_x,    \nabla^2 f_t(x) = A^T diag(d_x)^2 A

where $d_x = (1/(b_1 - a_1^T x), \ldots, 1/(b_m - a_m^T x))$ is a positive $m$-vector

(strict) convexity and its consequences (see pages 13-6 and 13-9)

- $\nabla^2 f_t(x)$ is positive definite for all $x \in P$
- first order condition: $f_t(y) > f_t(x) + \nabla f_t(x)^T (y - x)$ for all $x, y \in P$ with $x \ne y$
- $x$ minimizes $f_t$ if and only if $\nabla f_t(x) = 0$; if a minimizer exists, it is unique

Barrier method 14-3
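
To make these formulas concrete, here is a minimal numpy sketch of the barrier, the gradient, and the Hessian; the function names and the dense representation of $A$ are our own choices, not part of the lecture.

```python
import numpy as np

def barrier(x, A, b):
    """Logarithmic barrier phi(x) = -sum_i log(b_i - a_i^T x) (+inf outside P)."""
    s = b - A @ x                      # slacks b_i - a_i^T x
    return -np.sum(np.log(s)) if np.all(s > 0) else np.inf

def grad_hess_ft(x, t, c, A, b):
    """Gradient and Hessian of the centering cost f_t(x) = t c^T x + phi(x)."""
    d = 1.0 / (b - A @ x)              # the positive m-vector d_x
    g = t * c + A.T @ d                # grad f_t(x) = t c + A^T d_x
    H = A.T @ (d[:, None] ** 2 * A)    # hess f_t(x) = A^T diag(d_x)^2 A
    return g, H
```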

Lower bound for centering problem

the centering problem is bounded below if the dual LP is strictly feasible:

    f_t(y) \ge -t b^T z + \sum_{i=1}^m \log z_i + m \log t + m    for all y \in P

here $z$ can be any strictly dual feasible point ($A^T z + c = 0$, $z > 0$)

proof: the difference of the left- and right-hand sides is

    t (c^T y + b^T z) - \sum_{i=1}^m \log(t (b_i - a_i^T y) z_i) - m
      = t (b - Ay)^T z - \sum_{i=1}^m \log(t (b_i - a_i^T y) z_i) - m
      \ge 0

(since $u - \log u - 1 \ge 0$ for $u > 0$)

Barrier method 14-4

Newton method for centering problem

Newton step for $f_t$ at $x \in P = \{y \mid Ay < b\}$:

    \Delta x_{nt} = -\nabla^2 f_t(x)^{-1} \nabla f_t(x)
                  = -\nabla^2 \phi(x)^{-1} (t c + \nabla \phi(x))
                  = -(A^T diag(d_x)^2 A)^{-1} (t c + A^T d_x)

Newton iteration: choose a suitable stepsize $\alpha$ and make the update

    x := x + \alpha \Delta x_{nt}

we will show that Newton's method converges if $f_t$ is bounded below

Barrier method 14-5
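
A hedged sketch of the Newton step computation (a direct dense solve; in practice a Cholesky factorization of the Hessian would be the usual choice, and all names here are ours):

```python
import numpy as np

def newton_step(x, t, c, A, b):
    """Newton step dx_nt = -hess^{-1} grad of f_t at a strictly feasible x."""
    d = 1.0 / (b - A @ x)
    g = t * c + A.T @ d                # gradient of f_t
    H = A.T @ (d[:, None] ** 2 * A)    # Hessian, positive definite on P
    dx = -np.linalg.solve(H, g)        # Newton step
    return dx, g
```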

Outline

- centering problem
- Newton decrement
- local convergence of Newton method
- short-step barrier method
- global convergence of Newton method
- predictor-corrector method

Newton decrement

the Newton decrement at $x \in P$ is

    \lambda_t(x) = (\Delta x_{nt}^T \nabla^2 \phi(x) \Delta x_{nt})^{1/2}
                 = \| diag(d_x) A \Delta x_{nt} \|
                 = \| \Delta x_{nt} \|_x

- $-\lambda_t(x)^2$ is the directional derivative of $f_t$ at $x$ in the direction $\Delta x_{nt}$:

      -\lambda_t(x)^2 = \nabla f_t(x)^T \Delta x_{nt} = \left. \frac{d}{d\alpha} f_t(x + \alpha \Delta x_{nt}) \right|_{\alpha = 0}

- $\lambda_t(x) = 0$ if and only if $x = x^\star(t)$
- we will use $\lambda_t(x)$ to measure the proximity of $x$ to $x^\star(t)$

Barrier method 14-6
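
The decrement comes for free from the step and the gradient via $\lambda_t(x)^2 = -\nabla f_t(x)^T \Delta x_{nt}$; a small sketch reusing newton_step above:

```python
import numpy as np

def newton_decrement(dx, g):
    """lambda_t(x) from dx_nt and grad f_t(x): lambda^2 = -g^T dx_nt."""
    return np.sqrt(max(-g @ dx, 0.0))   # clip tiny negative rounding errors
```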

Dual feasible points near central path

on the central path: strictly dual feasible point from $x = x^\star(t)$:

    z^\star(t) = \frac{1}{t} d_x,    z_i^\star(t) = \frac{1}{t (b_i - a_i^T x)},    i = 1, \ldots, m

near the central path: for $x \in P$ with $\lambda_t(x) < 1$, define

    z = \frac{1}{t} \left( d_x + diag(d_x)^2 A \Delta x_{nt} \right)

- $\Delta x_{nt}$ is the Newton step for $f_t$ at $x$
- $z$ is strictly dual feasible (see next page); the duality gap with $x$ is

      (b - Ax)^T z = \frac{m + d_x^T A \Delta x_{nt}}{t} \le \left( 1 + \frac{\lambda_t(x)}{\sqrt{m}} \right) \frac{m}{t}

Barrier method 14-7
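
A sketch of this dual point construction (names ours; the returned $z$ is a strictly feasible dual point only when $\lambda_t(x) < 1$):

```python
import numpy as np

def dual_point(x, dx_nt, t, A, b):
    """z = (1/t)(d_x + diag(d_x)^2 A dx_nt); gap with x is (b - Ax)^T z."""
    s = b - A @ x
    d = 1.0 / s
    z = (d + d ** 2 * (A @ dx_nt)) / t
    gap = s @ z                         # equals (m + d_x^T A dx_nt) / t
    return z, gap
```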

proof: by definition, the Newton step $\Delta x_{nt}$ at $x$ satisfies

    A^T diag(d_x)^2 A \Delta x_{nt} = -t c - A^T d_x

- therefore $z$ satisfies the equality constraints $A^T z + c = 0$
- $z > 0$ if and only if $\mathbf{1} + diag(d_x) A \Delta x_{nt} > 0$; a sufficient condition is

      \lambda_t(x) = \| diag(d_x) A \Delta x_{nt} \| < 1

- the bound on the duality gap follows from the Cauchy-Schwarz inequality

Barrier method 14-8

Lower bound for centering problem near central path

substituting the dual point $z$ of page 14-7 in the lower bound of page 14-4 gives

    f_t(y) \ge f_t(x) - \sum_{i=1}^m (w_i - \log(1 + w_i))    for all y \in P

where $w_i = (a_i^T \Delta x_{nt}) / (b_i - a_i^T x)$

this bound holds if $\lambda_t(x) < 1$; a simpler bound holds if $\lambda_t(x) \le 0.68$:

    f_t(y) \ge f_t(x) - \sum_{i=1}^m w_i^2 = f_t(x) - \lambda_t(x)^2

(since $u - \log(1 + u) \le u^2$ for $u \ge -0.68$; the figure compares the graphs of $u - \log(1 + u)$ and $u^2$)

Barrier method 14-9

Outline

- centering problem
- Newton decrement
- local convergence of Newton method
- short-step barrier method
- global convergence of Newton method
- predictor-corrector method

Quadratic convergence of Newton's method

theorem: if $\lambda_t(x) < 1$ and $\Delta x_{nt}$ is the Newton step of $f_t$ at $x$, then

    x^+ = x + \Delta x_{nt} \in P,    \lambda_t(x^+) \le \lambda_t(x)^2

Newton method (with unit stepsize)

    x^{(k+1)} = x^{(k)} - \nabla^2 f_t(x^{(k)})^{-1} \nabla f_t(x^{(k)})

- if $\lambda_t(x^{(0)}) \le 1/2$, the Newton decrement after $k$ iterations satisfies

      \lambda_t(x^{(k)}) \le (1/2)^{2^k}

- this decreases very quickly: $(1/2)^{2^5} \approx 2.3 \cdot 10^{-10}$, $(1/2)^{2^6} \approx 5.4 \cdot 10^{-20}$, ...
- $\lambda_t(x^{(k)})$ is very small after a few iterations

Barrier method 14-10
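
In this quadratically convergent phase the unit-step iteration suffices; a sketch (reusing newton_step and newton_decrement from above; the stopping rule and iteration cap are ours):

```python
def pure_newton(x, t, c, A, b, tol=1e-10, max_iter=50):
    """Unit-step Newton method; appropriate once lambda_t(x) <= 1/2."""
    for _ in range(max_iter):
        dx, g = newton_step(x, t, c, A, b)
        if newton_decrement(dx, g) <= tol:
            break
        x = x + dx                      # unit stepsize
    return x
```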

proof of quadratic convergence result

feasibility of $x^+$: follows from $\lambda_t(x) = \|\Delta x_{nt}\|_x < 1$ and the result on page 13-7

quadratic convergence: define $D = diag(d_x)$, $D_+ = diag(d_{x^+})$, and let $\Delta x_{nt}^+$ be the Newton step at $x^+$; then

    \lambda_t(x^+)^2 = \| D_+ A \Delta x_{nt}^+ \|^2
      \le \| D_+ A \Delta x_{nt}^+ \|^2 + \| (I - D_+^{-1} D) D A \Delta x_{nt} + D_+ A \Delta x_{nt}^+ \|^2
      = \| (I - D_+^{-1} D) D A \Delta x_{nt} \|^2
      = \| (I - D_+^{-1} D) (I - D_+^{-1} D) \mathbf{1} \|^2
      \le \| (I - D_+^{-1} D) \mathbf{1} \|^4
      = \| D A \Delta x_{nt} \|^4 = \lambda_t(x)^4

Barrier method 14-11

on lines 4 and 6 we used

    D A \Delta x_{nt} = D \left( (b - Ax) - (b - Ax^+) \right) = (I - D_+^{-1} D) \mathbf{1}

line 3 follows from

    A^T D_+ \left( D_+ A \Delta x_{nt}^+ + (I - D_+^{-1} D) D A \Delta x_{nt} \right)
      = A^T D_+^2 A \Delta x_{nt}^+ - A^T D^2 A \Delta x_{nt} + A^T D_+ D A \Delta x_{nt}
      = -t c - A^T D_+ \mathbf{1} + t c + A^T D \mathbf{1} + A^T D_+ (I - D_+^{-1} D) \mathbf{1}
      = 0

line 5 follows from $\sum_i y_i^4 \le (\sum_i y_i^2)^2$

Barrier method 14-12

Outline

- centering problem
- Newton decrement
- local convergence of Newton method
- short-step barrier method
- global convergence of Newton method
- predictor-corrector method

Short-step barrier method

simplifying assumptions

- a central point $x^\star(t_0)$ is given
- $x^\star(t)$ is computed exactly

algorithm: define a tolerance $\epsilon \in (0, 1)$ and the parameter $\mu = 1 + 1/(2\sqrt{m})$

starting at $t = t_0$, repeat until $m/t \le \epsilon$:

- compute $x^\star(\mu t)$ by Newton's method with unit step, started at $x^\star(t)$
- set $t := \mu t$

Barrier method 14-13
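
A direct transcription of the short-step method as a sketch (it assumes x0 is the central point $x^\star(t_0)$; pure_newton is the unit-step routine sketched earlier):

```python
import numpy as np

def short_step_barrier(x0, t0, c, A, b, eps=1e-6):
    """Short-step barrier method with mu = 1 + 1/(2 sqrt(m))."""
    m = len(b)
    mu = 1.0 + 1.0 / (2.0 * np.sqrt(m))
    x, t = x0, t0
    while m / t > eps:
        t = mu * t
        x = pure_newton(x, t, c, A, b)  # recenter: unit Newton steps suffice
    return x, t                         # c^T x is at most eps-suboptimal
```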

Newton decrement after update of t

gradient of $f_{t^+}$ at $x = x^\star(t)$ for the new value $t^+ = \mu t$:

    \nabla f_{t^+}(x) = \mu t c + A^T d_x = -(\mu - 1) A^T d_x

(using $t c + A^T d_x = 0$ at $x = x^\star(t)$); the Newton decrement for the new value $t^+$ is

    \lambda_{t^+}(x) = \left( \nabla f_{t^+}(x)^T \nabla^2 \phi(x)^{-1} \nabla f_{t^+}(x) \right)^{1/2}
                     = (\mu - 1) \left( \mathbf{1}^T B (B^T B)^{-1} B^T \mathbf{1} \right)^{1/2}
                     \le (\mu - 1) \sqrt{m} = 1/2

(with $B = diag(d_x) A$)

- line 3 follows because the maximum eigenvalue of $B (B^T B)^{-1} B^T$ is one
- $x^\star(t)$ is in the region of quadratic convergence of Newton's method for $f_{\mu t}$

Barrier method 14-14

Iteration complexity

- Newton iterations per outer iteration: a small constant number
- number of outer iterations: we reach $t^{(k)} = \mu^k t_0 \ge m/\epsilon$ when

      k \ge \frac{\log(m / (\epsilon t_0))}{\log \mu}

- cumulative number of Newton iterations:

      O\left( \sqrt{m} \log \frac{m}{\epsilon t_0} \right)

  (we used $\log \mu = \log(1 + 1/(2\sqrt{m})) \ge (\log 2)/(2\sqrt{m})$)

- multiply by the flops per Newton iteration to get polynomial complexity
- the $\sqrt{m}$ dependence is the lowest known complexity for interior-point methods

Barrier method 14-15
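
For a feel of the numbers, a tiny check of the outer-iteration count (the problem sizes here are our own example, not from the slides):

```python
import numpy as np

m, eps, t0 = 100, 1e-6, 1.0
mu = 1.0 + 1.0 / (2.0 * np.sqrt(m))
outer = int(np.ceil(np.log(m / (eps * t0)) / np.log(mu)))
print(outer)   # 378, close to the estimate 2*sqrt(m)*log(m/(eps*t0)) ~ 368
```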

Outline

- centering problem
- Newton decrement
- local convergence of Newton method
- short-step barrier method
- global convergence of Newton method
- predictor-corrector method

Maximum stepsize to boundary

for $x \in P$ and arbitrary $v \ne 0$, define

    \sigma_x(v) = 0                                             if Av \le 0
    \sigma_x(v) = \max_{i=1,\ldots,m} \frac{a_i^T v}{b_i - a_i^T x}    otherwise

the point $x + \alpha v$ (with $\alpha \ge 0$) is in $P$ if and only if $\alpha \sigma_x(v) < 1$:

    x + \alpha v \in P    for \alpha \in [0, 1/\sigma) if \sigma > 0,    for \alpha \in [0, \infty) if \sigma = 0

(figure: the ray from $x$ through $x + v$ exits $P$ at $x + \sigma^{-1} v$)

Barrier method 14-16
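
A sketch of $\sigma_x(v)$ (the name max_step_to_boundary is ours):

```python
import numpy as np

def max_step_to_boundary(x, v, A, b):
    """sigma_x(v): x + alpha v stays in P exactly for alpha in [0, 1/sigma)."""
    r = (A @ v) / (b - A @ x)            # ratios a_i^T v / (b_i - a_i^T x)
    return max(float(np.max(r)), 0.0)    # 0 when Av <= 0
```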

Upper bound on centering cost function

arbitrary direction: for $x \in P$ and arbitrary $v \ne 0$

- if $\sigma = \sigma_x(v) > 0$ and $\alpha \in [0, 1/\sigma)$:

      f_t(x + \alpha v) \le f_t(x) + \alpha \nabla f_t(x)^T v - \frac{\|v\|_x^2}{\sigma^2} \left( \alpha\sigma + \log(1 - \alpha\sigma) \right)

- if $\sigma = \sigma_x(v) = 0$ and $\alpha \in [0, \infty)$:

      f_t(x + \alpha v) \le f_t(x) + \alpha \nabla f_t(x)^T v + \frac{\alpha^2}{2} \|v\|_x^2

on the right-hand sides, $\|v\|_x = (v^T \nabla^2 f_t(x) v)^{1/2}$

Newton direction: for $v = \Delta x_{nt}$, substitute $\nabla f_t(x)^T v = -\|v\|_x^2 = -\lambda_t(x)^2$

Barrier method 14-17

proof: define $w_i = (a_i^T v)/(b_i - a_i^T x)$ and note that $\|w\| = \|v\|_x$ and

    f_t(x + \alpha v) - f_t(x) - \alpha \nabla f_t(x)^T v = -\sum_{i=1}^m \left( \alpha w_i + \log(1 - \alpha w_i) \right)

if $\sigma = \max_i w_i > 0$:

    -\sum_{i=1}^m \left( \alpha w_i + \log(1 - \alpha w_i) \right)
      \le -\sum_{w_i > 0} \left( \alpha w_i + \log(1 - \alpha w_i) \right) + \sum_{w_i \le 0} \frac{\alpha^2 w_i^2}{2}    (*)
      \le -\sum_{w_i > 0} \frac{w_i^2}{\sigma^2} \left( \alpha\sigma + \log(1 - \alpha\sigma) \right) + \sum_{w_i \le 0} \frac{w_i^2}{\sigma^2} \cdot \frac{(\alpha\sigma)^2}{2}
      \le -\frac{\alpha\sigma + \log(1 - \alpha\sigma)}{\sigma^2} \sum_{i=1}^m w_i^2

if $\sigma = 0$ (all $w_i \le 0$), the upper bound follows from (*)

Barrier method 14-18

Damped Newton iteration

    x^+ = x + \frac{1}{1 + \sigma_x(\Delta x_{nt})} \Delta x_{nt}

theorem: the damped Newton iteration at any $x \in P$ decreases the cost:

    f_t(x^+) \le f_t(x) - \left( \lambda_t(x) - \log(1 + \lambda_t(x)) \right)

if $\lambda_t(x) \ge 0.5$,

    f_t(x^+) \le f_t(x) - 0.09

(figure: graph of $u - \log(1 + u)$)

Barrier method 14-19

proof: apply the upper bounds on page 14-17 with $v = \Delta x_{nt}$

if $\sigma > 0$, the value of the upper bound

    f_t(x) - \alpha \lambda_t(x)^2 - \frac{\lambda_t(x)^2}{\sigma^2} \left( \alpha\sigma + \log(1 - \alpha\sigma) \right)

at $\alpha = 1/(1 + \sigma)$ is

    f_t(x) - \frac{\lambda_t(x)^2}{\sigma^2} \left( \sigma - \log(1 + \sigma) \right) \le f_t(x) - \lambda_t(x) + \log(1 + \lambda_t(x))

(the inequality uses $\sigma \le \lambda_t(x)$ and the fact that $(u - \log(1 + u))/u^2$ is decreasing)

if $\sigma = 0$, the value of the upper bound at $\alpha = 1$ is

    f_t(x) - \alpha \lambda_t(x)^2 + \frac{\alpha^2}{2} \lambda_t(x)^2 = f_t(x) - \frac{\lambda_t(x)^2}{2} \le f_t(x) - \lambda_t(x) + \log(1 + \lambda_t(x))

Barrier method 14-20

Summary: Newton algorithm for centering

centering problem

    minimize   f_t(x) = t c^T x + \phi(x)

algorithm: given a tolerance $\delta \in (0, 1)$ and a starting point $x := x^{(0)} \in P$, repeat:

1. compute the Newton step $\Delta x_{nt}$ at $x$ and the Newton decrement $\lambda_t(x)$
2. if $\lambda_t(x) \le \delta$, return $x$
3. otherwise, set $x := x + \alpha \Delta x_{nt}$ with

       \alpha = \frac{1}{1 + \sigma_x(\Delta x_{nt})} if \lambda_t(x) > 1/2,    \alpha = 1 if \lambda_t(x) \le 1/2

Barrier method 14-21
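
Putting the pieces together, a sketch of the summary algorithm (reusing newton_step, newton_decrement, and max_step_to_boundary from the earlier sketches; the iteration cap is ours):

```python
def center(x, t, c, A, b, delta=1e-8, max_iter=500):
    """Damped Newton method for the centering problem: returns x with
    lambda_t(x) <= delta, following steps 1-3 of the summary above."""
    for _ in range(max_iter):
        dx, g = newton_step(x, t, c, A, b)               # step 1
        lam = newton_decrement(dx, g)
        if lam <= delta:                                  # step 2
            return x
        if lam > 0.5:                                     # step 3: damped phase
            alpha = 1.0 / (1.0 + max_step_to_boundary(x, dx, A, b))
        else:                                             # quadratic phase
            alpha = 1.0
        x = x + alpha * dx
    raise RuntimeError("centering did not converge")
```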

Convergence

theorem: if $\delta < 1/2$ and $f_t$ is bounded below, the algorithm takes at most

    \frac{f_t(x^{(0)}) - \min_y f_t(y)}{1/2 - \log(3/2)} + \log_2 \log_2 (1/\delta)    (1)

iterations

proof: combine the theorems on pages 14-10 and 14-19

- if $\lambda_t(x^{(k)}) > 1/2$, iteration $k$ decreases the function value by at least

      \lambda_t(x^{(k)}) - \log(1 + \lambda_t(x^{(k)})) \ge 1/2 - \log(3/2) \approx 0.09

- the first term in (1) therefore bounds the number of iterations with $\lambda_t(x) > 1/2$
- if $\lambda_t(x^{(l)}) \le 1/2$, quadratic convergence yields $\lambda_t(x^{(k)}) \le \delta$ after $k = l + \log_2 \log_2 (1/\delta)$ iterations

Barrier method 14-22

Computable bound on #iterations

replace the unknown $\min_y f_t(y)$ in (1) by the lower bound from page 14-4:

    f_t(x) - \min_y f_t(y) \le V_t(x, z)

where $z$ is a strictly dual feasible point and

    V_t(x, z) = f_t(x) + t b^T z - \sum_{i=1}^m \log z_i - m \log t - m
              = t (b - Ax)^T z - \sum_{i=1}^m \log(t (b_i - a_i^T x) z_i) - m

the number of Newton iterations to minimize $f_t$ starting at $x$ is bounded by

    10.6 \, V_t(x, z) + \log_2 \log_2 (1/\delta)

Barrier method 14-23
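
A sketch of this computable bound (the helper name is ours; note $t(b - Ax)^T z - \sum_i \log(t s_i z_i) - m = \sum_i u_i - \sum_i \log u_i - m$ with $u_i = t s_i z_i$):

```python
import numpy as np

def centering_iteration_bound(x, z, t, A, b, delta=1e-8):
    """10.6 V_t(x,z) + log2 log2(1/delta), with V_t as defined above."""
    u = t * (b - A @ x) * z                    # componentwise t s_i z_i
    V = np.sum(u) - np.sum(np.log(u)) - len(b)
    return 10.6 * V + np.log2(np.log2(1.0 / delta))
```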

Outline

- centering problem
- Newton decrement
- local convergence of Newton method
- short-step barrier method
- global convergence of Newton method
- predictor-corrector method

Predictor-corrector methods

short-step methods

- stay in a narrow neighborhood of the central path, defined by a limit on $\lambda_t$
- make small, fixed increases $t^+ = \mu t$
- quite slow in practice

predictor-corrector methods

- select the new $t$ using a linear approximation to the central path ("predictor")
- recenter with the new $t$ ("corrector")
- can make faster and adaptive increases in $t$

Barrier method 14-24

Tangent to central path

central path equations

    \begin{bmatrix} 0 \\ s^\star(t) \end{bmatrix}
      = \begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix}
        \begin{bmatrix} x^\star(t) \\ z^\star(t) \end{bmatrix}
      + \begin{bmatrix} c \\ b \end{bmatrix},
    \qquad s_i^\star(t) z_i^\star(t) = \frac{1}{t}, \quad i = 1, \ldots, m

derivatives: $\dot{x} = dx^\star(t)/dt$, $\dot{s} = ds^\star(t)/dt$, $\dot{z} = dz^\star(t)/dt$ satisfy

    \begin{bmatrix} 0 \\ \dot{s} \end{bmatrix}
      = \begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix}
        \begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix},
    \qquad s_i^\star(t) \dot{z}_i + z_i^\star(t) \dot{s}_i = -\frac{1}{t^2}, \quad i = 1, \ldots, m

tangent direction: defined as $\Delta x_{tg} = t \dot{x}$, $\Delta s_{tg} = t \dot{s}$, $\Delta z_{tg} = t \dot{z}$

Barrier method 14-25

Predictor equations

with $x = x^\star(t)$, $s = s^\star(t)$, $z = z^\star(t)$:

    \begin{bmatrix} (1/t) \, diag(s)^{-2} & 0 & I \\ 0 & 0 & A^T \\ I & A & 0 \end{bmatrix}
    \begin{bmatrix} \Delta s_{tg} \\ \Delta x_{tg} \\ \Delta z_{tg} \end{bmatrix}
      = \begin{bmatrix} -z \\ 0 \\ 0 \end{bmatrix}    (1)

equivalent equation (using $s_i z_i = 1/t$):

    \begin{bmatrix} I & 0 & (1/t) \, diag(z)^{-2} \\ 0 & 0 & A^T \\ I & A & 0 \end{bmatrix}
    \begin{bmatrix} \Delta s_{tg} \\ \Delta x_{tg} \\ \Delta z_{tg} \end{bmatrix}
      = \begin{bmatrix} -s \\ 0 \\ 0 \end{bmatrix}    (2)

Barrier method 14-26
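
System (2) can be solved by block elimination; this reduction to a single positive definite system in $\Delta x_{tg}$ is our own (equivalent) rearrangement, not a step stated on the slides:

```python
import numpy as np

def tangent_direction(s, z, t, A):
    """Solve (2) for (ds_tg, dx_tg, dz_tg) by block elimination:
    the 1st and 3rd block rows give dz = t diag(z)^2 (A dx - s), and
    A^T dz = 0 then gives A^T diag(z)^2 A dx = A^T diag(z)^2 s."""
    H = A.T @ (z[:, None] ** 2 * A)
    dx = np.linalg.solve(H, A.T @ (z ** 2 * s))
    ds = -A @ dx
    dz = t * z ** 2 * (A @ dx - s)
    return ds, dx, dz
```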

Properties of tangent direction

- from the 2nd and 3rd block rows in (1): $\Delta s_{tg}^T \Delta z_{tg} = 0$
- take the inner product with $s$ on both sides of the first block row in (1):

      z^T \Delta s_{tg} + s^T \Delta z_{tg} = -s^T z

  hence, the gap in the tangent direction is

      (s + \alpha \Delta s_{tg})^T (z + \alpha \Delta z_{tg}) = (1 - \alpha) s^T z

- take the inner product with $\Delta s_{tg}$ on both sides of the first block row in (1):

      \Delta s_{tg}^T diag(s)^{-2} \Delta s_{tg} = -t z^T \Delta s_{tg}

- similarly, from the first block row in (2):

      \Delta z_{tg}^T diag(z)^{-2} \Delta z_{tg} = -t s^T \Delta z_{tg}

Barrier method 14-27

Potential function

definition: for strictly primal and dual feasible $x$, $z$, define (with $s = b - Ax$)

    \Psi(x, z) = m \log \frac{z^T s}{m} - \sum_{i=1}^m \log(s_i z_i)
               = m \log \frac{\left( \sum_i z_i s_i \right)/m}{\left( \prod_i z_i s_i \right)^{1/m}}
               = m \log \frac{\text{arithmetic mean of } z_1 s_1, \ldots, z_m s_m}{\text{geometric mean of } z_1 s_1, \ldots, z_m s_m}

properties

- $\Psi(x, z)$ is nonnegative for all strictly feasible $x$, $z$
- $\Psi(x, z) = 0$ only if $x$ and $z$ are centered, i.e., for some $t > 0$,

      z_i s_i = \frac{1}{t},   i = 1, \ldots, m

Barrier method 14-28
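
A small sketch of the potential (names ours):

```python
import numpy as np

def potential(x, z, A, b):
    """Psi(x, z) = m log(z^T s / m) - sum_i log(s_i z_i), with s = b - Ax."""
    s = b - A @ x
    m = len(b)
    return m * np.log(s @ z / m) - np.sum(np.log(s * z))
```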

Potential function and proximity to central path

for any strictly feasible $x$, $z$, with $s = b - Ax$:

    \Psi(x, z) = \min_{t > 0} V_t(x, z)    (see page 14-23)
               = \min_{t > 0} \left( t s^T z - \sum_{i=1}^m \log(t s_i z_i) - m \right)

the minimizing $t$ is $t = m / (s^T z)$

$\Psi$ as a global measure of proximity to the central path

- $V_t(x, z)$ bounds the effort to compute $x^\star(t)$, starting at $x$ (page 14-23)
- $\Psi(x, z)$ bounds the centering effort, without imposing a specific $t$

Barrier method 14-29

Predictor-corrector method with exact centering

simplifying assumptions: exact centering; a central point $x^\star(t_0)$ is given

algorithm: define a tolerance $\epsilon \in (0, 1)$, a parameter $\beta > 0$, and initial values

    t := t_0,   x := x^\star(t_0),   z := z^\star(t_0),   s := b - A x^\star(t_0)

repeat until $m/t \le \epsilon$:

- compute the tangent direction $\Delta x_{tg}$, $\Delta s_{tg}$, $\Delta z_{tg}$ at $x$, $s$, $z$
- determine $\alpha$ by solving $\Psi(x + \alpha \Delta x_{tg}, z + \alpha \Delta z_{tg}) = \beta$ and take

      x := x + \alpha \Delta x_{tg},   z := z + \alpha \Delta z_{tg},   s := b - Ax

- set $t := m / (s^T z)$ and use Newton's method to compute

      x := x^\star(t),   z := z^\star(t),   s := b - Ax

Barrier method 14-30
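
A hedged sketch of the whole loop; the choice beta = 2, the bracketing constants for the root-finder, and the use of scipy's brentq are our own, and the exact corrector is idealized by calling the damped Newton routine center from earlier:

```python
import numpy as np
from scipy.optimize import brentq

def predictor_corrector(x, z, t0, c, A, b, eps=1e-6, beta=2.0):
    """Predictor-corrector method with (approximately) exact centering."""
    m = len(b)
    t = t0
    while m / t > eps:
        s = b - A @ x
        ds, dx, dz = tangent_direction(s, z, t, A)       # predictor direction
        # find alpha with Psi(x + alpha dx, z + alpha dz) = beta;
        # from exact centering the iterates stay strictly feasible
        # for alpha < 1/sqrt(m), so we bracket inside that interval
        f = lambda a: potential(x + a * dx, z + a * dz, A, b) - beta
        amax = 0.999 / np.sqrt(m)
        alpha = brentq(f, 1e-12, amax) if f(amax) > 0 else amax
        x, z = x + alpha * dx, z + alpha * dz
        t = m / ((b - A @ x) @ z)                        # adaptive new t
        x = center(x, t, c, A, b)                        # corrector (recenter)
        z = 1.0 / (t * (b - A @ x))                      # z*(t) = d_x / t
    return x, z, t
```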

Iteration complexity

bound on the potential function in the tangent direction (proof on next page):

    \Psi(x + \alpha \Delta x_{tg}, z + \alpha \Delta z_{tg}) \le -\alpha \sqrt{m} - \log(1 - \alpha \sqrt{m})

lower bound on the predictor step length $\alpha$:

    \alpha \sqrt{m} \ge \gamma    with \gamma the solution of -\gamma - \log(1 - \gamma) = \beta

reduction in the duality gap after one predictor-corrector cycle:

    \frac{t}{t^+} = 1 - \alpha \le 1 - \frac{\gamma}{\sqrt{m}} \le \exp\left( -\frac{\gamma}{\sqrt{m}} \right)

bound on the total #Newton iterations to reach $t^{(k)} \ge m/\epsilon$:

    O\left( \sqrt{m} \log \frac{m}{\epsilon t_0} \right)

Barrier method 14-31

proof of bound on \Psi: let $s^+ = s + \alpha \Delta s_{tg}$, $z^+ = z + \alpha \Delta z_{tg}$

from the definition of $\Psi$ and $(s^+)^T z^+ = (1 - \alpha) s^T z$:

    \Psi(x^+, z^+) - \Psi(x, z)
      = m \log \frac{(s^+)^T z^+}{s^T z} - \sum_{i=1}^m \left( \log \frac{s_i^+}{s_i} + \log \frac{z_i^+}{z_i} \right)
      = m \log(1 - \alpha) - \sum_{i=1}^m \left( \log \frac{s_i^+}{s_i} + \log \frac{z_i^+}{z_i} \right)

define a $2m$-vector $w = (diag(s)^{-1} \Delta s_{tg}, \, diag(z)^{-1} \Delta z_{tg})$; then

    \sum_{i=1}^m \left( \log(s_i^+/s_i) + \log(z_i^+/z_i) \right)
      = \sum_{i=1}^{2m} \log(1 + \alpha w_i)
      \ge \alpha \mathbf{1}^T w + \alpha \|w\| + \log(1 - \alpha \|w\|)

the last inequality can be proved as on page 14-18

Barrier method 14-32

from the properties on page 14-27 and $z_i = 1/(t s_i)$:

    \mathbf{1}^T w = t \left( s^T \Delta z_{tg} + z^T \Delta s_{tg} \right) = -t s^T z = -m

    \|w\|^2 = \Delta s_{tg}^T diag(s)^{-2} \Delta s_{tg} + \Delta z_{tg}^T diag(z)^{-2} \Delta z_{tg}
            = -t \left( z^T \Delta s_{tg} + s^T \Delta z_{tg} \right) = t s^T z = m

substituting this in the upper bound on $\Psi$ gives

    \Psi(x^+, z^+) \le \Psi(x, z) + m \log(1 - \alpha) + \alpha m - \alpha \sqrt{m} - \log(1 - \alpha \sqrt{m})
                   \le -\alpha \sqrt{m} - \log(1 - \alpha \sqrt{m})

Barrier method 14-33

Conclusion: barrier methods

- started at $x^\star(t_0)$, find an $\epsilon$-suboptimal point after

      O\left( \sqrt{m} \log \frac{m}{\epsilon t_0} \right)

  Newton iterations
- the analysis can be modified to account for inexact centering
- an end-to-end complexity analysis must include the cost of phase I
- parameters were chosen to simplify the analysis, not for efficiency in practice

Barrier method 14-34
