Inequality constrained minimization: log-barrier method

We wish to solve

    minimize c^T x  subject to  Ax ≤ b

with n = 50 and m = 100. We use the barrier method with the logarithmic barrier function

    φ(x) = −∑_{i=1}^{m} log(−(a_i^T x − b_i))

and solve a sequence of smooth unconstrained problems

    x*(t) = argmin_{x ∈ R^n}  t c^T x + φ(x).

Objective function augmented with the log-barrier:

function [f,g,H] = objective_barrier(t,x,A,b,c)
[m,n] = size(A);
d = A*x - b;                        % d < 0 for strictly feasible x
D = diag(1./d);
f = t*c'*x - log(-d)'*ones(m,1);    % = t*c'*x - sum(log(-d))
g = t*c - A'*D*ones(m,1);           % gradient of t*c'*x + phi(x)
H = A'*D^2*A;                       % Hessian

Problem parameters:

m = 100; n = 50;
ALPHA = .1; BETA = .7;              % backtracking line-search parameters
mu = 50;                            % factor by which t grows each outer step
A = randn(m,n);
b = 1 + abs(randn(m,1));            % b > 0, so x = 0 is strictly feasible
c = randn(n,1);

Smooth unconstrained minimization: start with the strictly feasible point x = 0; terminate when t = 10^8 (duality gap m/t ≤ 10^−6); centering uses Newton's method with backtracking line search.
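As a quick sanity check on the derivatives (our addition, not part of the original code), the analytic gradient returned by objective_barrier can be compared against a central finite-difference estimate at the strictly feasible point x = 0:

% Hypothetical derivative check: compare the analytic gradient of
% objective_barrier with a central finite-difference approximation.
t = 1; x0 = zeros(n,1);            % strictly feasible since b > 0
[f0,g0] = objective_barrier(t,x0,A,b,c);
h = 1e-6; g_fd = zeros(n,1);
for i = 1:n
    e = zeros(n,1); e(i) = h;
    g_fd(i) = (objective_barrier(t,x0+e,A,b,c) - ...
               objective_barrier(t,x0-e,A,b,c))/(2*h);
end
disp(norm(g_fd - g0)/norm(g0));    % should be on the order of 1e-8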

x = zeros(n,1); t = 1; histobj = [];
NTTOL = 1e-10;                      % stop inner iteration if lambda^2/2 < NTTOL
MAXITERS = 500;

while (t <= 1e8)                    % outer loop
  niter = 0;
  for k = 1:MAXITERS                % inner (centering) loop
    [val,g,H] = objective_barrier(t,x,A,b,c);
    v = -H\g;                       % Newton step
    lambda = g'*v;                  % = -(Newton decrement)^2

    % Perform backtracking line search along search direction:
    s = 1;
    % ... first get a strictly feasible point ...
    while (min(b - A*(x+s*v)) < 0), s = BETA*s; end
    % ... then search for sufficient decrease
    while (objective_barrier_val(t,x+s*v,A,b,c) > val + ALPHA*s*lambda)
      s = BETA*s;
    end

    x = x + s*v; niter = niter + 1;

    % Test if optimum achieved
    if (abs(lambda/2) < NTTOL), break; end  % decrement smaller than NTTOL?
  end

  % Display progress
  obj = c'*x;
  histobj = [histobj, [obj; niter; m/t]];   % bookkeeping
  disp(['obj: ',num2str(obj,'%1.6e'),'; PDGap: ',num2str(m/t,'%1.2e'), ...
        '; number iterations: ',int2str(niter)]);

  t = mu*t;                         % update barrier parameter
end
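The line search calls a helper objective_barrier_val that is not shown in the transcript. Presumably it returns only the objective value; a minimal sketch, assuming it mirrors objective_barrier and guards against leaving the domain:

% Sketch of the missing helper (assumption: value-only version of
% objective_barrier, returning +Inf outside the domain so the
% backtracking loop keeps shrinking the step).
function f = objective_barrier_val(t,x,A,b,c)
d = A*x - b;
if max(d) >= 0
    f = Inf;                        % x not strictly feasible
else
    f = t*c'*x - sum(log(-d));
end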

Plot results (more MATLAB commands produce the plot below):

PDGap = histobj(3,:);
niter = histobj(2,:);
total_iter = cumsum(niter);
figure; semilogy(total_iter,PDGap,'*');

[Figure 1: Duality gap versus total number of Newton iterations, for µ = 2, 50, 150. Vertical axis: duality gap (10^−7 to 10^2, log scale); horizontal axis: total number of iterations (0 to 200).]

[Figure 2: Trade-off between µ and the total number of Newton iterations needed to reduce the duality gap from 100 to 10^−4. Vertical axis: total number of iterations (0 to 160); horizontal axis: µ (0 to 200).]

The optimization problem is a moderately small inequality constrained LP, just as before. The plot shows that the method is not very sensitive to the value of µ, provided µ is at least about 10.
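A curve like Figure 2 can be reproduced by sweeping µ and recording the total number of Newton iterations per run. A minimal sketch, assuming the main loop above has been wrapped in a (hypothetical) function histobj = barrier_method(A,b,c,mu,ALPHA,BETA):

% Sketch: sweep over mu and record total Newton iterations per run.
% barrier_method is our own name for the loop above, assumed to return histobj.
mus = [2 5 10 20 50 100 150 200];
total = zeros(size(mus));
for j = 1:length(mus)
    histobj = barrier_method(A,b,c,mus(j),ALPHA,BETA);
    total(j) = sum(histobj(2,:));   % row 2 holds iterations per centering step
end
figure; plot(mus,total,'o-');
xlabel('mu'); ylabel('Total Number of iterations');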

The following figures are taken from our textbook (Boyd and Vandenberghe).

[Figure 11.7: Progress of the barrier method for three randomly generated standard form LPs of different dimensions, showing duality gap versus cumulative number of Newton steps. The number of variables in each problem is n = 2m.] Here too we see approximately linear convergence of the duality gap, with a slight increase in the number of Newton steps required for the larger problems.

[Figure 11.8: Average number of Newton steps required to solve 100 randomly generated LPs of different dimensions, with n = 2m. Vertical axis: Newton iterations (10 to 35); horizontal axis: m (10^1 to 10^3). Error bars show the standard deviation around the average value for each value of m.] The growth in the number of Newton steps required, as the problem dimensions range over a 100:1 ratio, is very small.

[Figure 11.15: Progress of the barrier method for an SOCP, showing duality gap versus cumulative number of Newton steps, for µ = 2, 50, 200. Vertical axis: duality gap (10^−6 to 10^2, log scale); horizontal axis: Newton iterations (0 to 80).]

The SOCP instance has x ∈ R^50, m = 50 second-order cone constraints, and A_i ∈ R^{5×50}. The problem instance was randomly generated, in such a way that the problem is strictly primal and dual feasible, and has optimal value p* = 1. We start with a point x^(0) on the central path, with a duality gap of 100. The barrier method is used to solve the problem, using the barrier function

    φ(x) = −∑_{i=1}^{m} log((c_i^T x + d_i)^2 − ‖A_i x + b_i‖_2^2).

The centering problems are solved using Newton's method, with the same algorithm parameters as in the examples of §11.3.2: backtracking parameters α = 0.01, β = 0.5, and stopping criterion λ(x)^2/2 ≤ 10^−5.

Figure 11.15 shows the duality gap versus cumulative number of Newton steps. The plot is very similar to those for linear and geometric programming, shown in figures 11.4 and 11.6, respectively. We see an approximately constant number of Newton steps required per centering step, and therefore approximately linear convergence of the duality gap. For this example, too, the choice of µ has little effect on the total number of Newton steps, provided µ is at least 10 or so. As in the examples for linear and geometric programming, a reasonable choice of µ is in the range 10 to 100, which results in a total number of Newton steps around 30 (see figure 11.16).

[Figure 11.16: Trade-off in the choice of the parameter µ, for a small SOCP. The vertical axis shows the total number of Newton steps required to reduce the duality gap from 100 to 10^−3; the horizontal axis shows µ.]

Our next example is a small SDP,

    minimize    c^T x
    subject to  ∑_{i=1}^{n} x_i F_i + G ⪯ 0,        (11.46)

with variable x ∈ R^100, and F_i ∈ S^100, G ∈ S^100. The problem instance was generated randomly, in such a way that the problem is strictly primal and dual feasible, with p* = 1. The initial point is on the central path, with a duality gap of 100. We apply the barrier method with logarithmic barrier function …
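For concreteness, the SOCP barrier above can be evaluated in the same style as objective_barrier. The sketch below is our own illustration, not the textbook's code; the function name and the data layout (Ai, bi as cell arrays, ci as an n-by-m matrix, di as an m-vector) are assumptions:

% Sketch (hypothetical names/layout): evaluates t*c'*x + phi(x) with
%   phi(x) = -sum_i log((ci_i'*x + di_i)^2 - ||Ai_i*x + bi_i||_2^2).
function f = socp_barrier_val(t,x,c,Ai,bi,ci,di)
m = length(di);
f = t*c'*x;
for i = 1:m
    u = ci(:,i)'*x + di(i);         % scalar side of the i-th cone
    r = Ai{i}*x + bi{i};            % vector side
    gap = u^2 - r'*r;
    if (u <= 0) || (gap <= 0)       % outside the interior of the cone
        f = Inf; return
    end
    f = f - log(gap);
end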