Improving Performance of The Interior Point Method by Preconditioning


Improving Performance of The Interior Point Method by Preconditioning
Mid-Point Status Report
Project by: Ken Ryals
For: AMSC 663-664, Fall 2007 - Spring 2008
6 December 2007

Background / Refresher
The IPM solves a sequence of optimization problems using penalty (barrier) functions, such that the sequence of solutions approaches the true solution from within the feasible region. As the penalty parameter µ is relaxed and the problem is re-solved, the numerical properties of the problem become more interesting as the system approaches the true constrained optimization problem.
12/12/2007
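The path-following idea can be seen in one dimension. The sketch below (Python rather than the project's Matlab; the example problem is invented purely for illustration) minimizes f(x) = x subject to x >= 1 through the log-barrier B(x; µ) = x - µ ln(x - 1), whose minimizer has the closed form x*(µ) = 1 + µ:

```python
# Toy illustration of the central path (not the project's solver).
# Minimize f(x) = x subject to x >= 1 via the log-barrier
#   B(x; mu) = x - mu * ln(x - 1).
# Setting B'(x) = 1 - mu/(x - 1) = 0 gives x*(mu) = 1 + mu, which
# approaches the constrained optimum x = 1 from inside the feasible
# region as the penalty parameter mu is relaxed toward 0.
for mu in [1.0, 0.1, 0.01, 0.001]:
    x_star = 1.0 + mu  # closed-form minimizer of the barrier problem
    print(f"mu = {mu:<6}  x*(mu) = {x_star}")
```

As µ shrinks the barrier solutions trace the central path toward the constrained optimum, which is exactly the sequence of problems the IPM solves.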

Application
Why is the IPM method of interest? It applies to a wide range of problem types:
- Linear Constrained Optimization
- Semidefinite Problems
- Second Order Cone Problems
Once in the "good" region of a solution to the set of problems on the solution path:
- Convergence properties are great ("quadratic").
- The iterates stay within the feasible region.
Specific Research Problem: Optimization of Distributed Command and Control

Optimization Problem
The linear optimization problem can be formulated as follows:
  inf { c^T x : Ax = b, x >= 0 }
Here x is the unknown, y is the dual variable, and z is the slack.
The search direction is implicitly defined by the system:
  Δx + π Δz = r
  A Δx = 0
  A^T Δy + Δz = 0
Def: π = D^2, where D^2 z = x (so π z = x); D is the geometric-mean scaling metric of X and Z.
From these three equations, the Reduced Equation for Δy is:
  A π A^T Δy = A r (= b)
From Δy we can then recover Δx = r - π (A^T Δy).
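The elimination above can be checked numerically. A minimal NumPy sketch (random data standing in for a real LP iterate; the names A, x, z, r follow the slide, everything else is assumed):

```python
import numpy as np

# Random stand-in for an IPM iterate: A has full row rank, x and z are
# strictly positive, and r is the right-hand side of the first equation.
rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n)   # primal iterate, x > 0
z = rng.uniform(0.5, 2.0, n)   # dual slacks,   z > 0
r = rng.standard_normal(n)

pi = np.diag(x / z)            # diagonal pi with pi @ z == x
M = A @ pi @ A.T               # reduced matrix (m x m, symmetric positive definite)
dy = np.linalg.solve(M, A @ r) # Reduced Equation:  A pi A^T dy = A r
dx = r - pi @ (A.T @ dy)       # recover dx as on the slide

print(np.allclose(pi @ z, x))  # True: pi z = x by construction
print(np.allclose(A @ dx, 0))  # True: the recovered dx satisfies A dx = 0
```

Because the reduced matrix is symmetric positive definite, it is exactly the kind of system a (preconditioned) conjugate gradient solver handles well, which is what the project exploits.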

Reminder - The Math Behind It All
We are solving:  A π A^T Δy = A r (= b)
A is not square, so it isn't invertible; but AA^T is.
What if we pre-multiplied by (AA^T)^-1?
  (AA^T)^-1 A π A^T Δy = (AA^T)^-1 A r
Conceptually, we have:
  (A^T)^-1 π A^T Δy = (AA^T)^-1 b
Since this looks like a similarity transform, it might have nice properties.
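A small numerical illustration of this idea (random data; the wild scaling of π mimics a late IPM iterate, and none of this comes from the project's code):

```python
import numpy as np

# Hypothetical data: A is a random full-row-rank matrix and pi is a badly
# scaled diagonal, as happens when the IPM nears convergence.
rng = np.random.default_rng(1)
m, n = 4, 10
A = rng.standard_normal((m, n))
pi = np.diag(rng.uniform(1e-4, 1e4, n))

M = A @ pi @ A.T                  # reduced matrix  A pi A^T
P = np.linalg.inv(A @ A.T)        # proposed preconditioner (AA^T)^-1
print(f"cond(A pi A^T)           = {np.linalg.cond(M):.2e}")
print(f"cond((AA^T)^-1 A pi A^T) = {np.linalg.cond(P @ M):.2e}")

# Sanity check of the limiting case: with pi = I the preconditioned
# matrix is exactly the identity.
print(np.allclose(P @ (A @ np.eye(n) @ A.T), np.eye(m)))  # True
```

In the limiting case π = I the preconditioned matrix is the identity exactly; away from that limit the preconditioner often, though not always, reduces the condition number substantially.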

The Project
Goal: Develop a more stable LP IPM solver.
- Develop a Matlab system that applies the IPM using a preconditioned conjugate gradient solver for the linear system of equations, with (AA^T)^-1 as the preconditioner.
- Also incorporate the stability benefits of factorization in the system.
- Time permitting, apply one speed improvement to the Matlab solver.
Application: Solve the OSD A&T distributed command and control problem.
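A SciPy stand-in for the planned Matlab solver step (the data here is random; only the structure, PCG on A π A^T Δy = rhs with the action of (AA^T)^-1 supplied as the preconditioner, mirrors the plan):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Random stand-in data (not the OSD A&T problem): K = A pi A^T is the
# reduced matrix, and the preconditioner applies (A A^T)^-1.
rng = np.random.default_rng(2)
m, n = 5, 12
A = rng.standard_normal((m, n))
pi = rng.uniform(0.1, 10.0, n)          # diagonal of pi, stored as a vector

K = A @ (pi[:, None] * A.T)             # A pi A^T (symmetric positive definite)
rhs = rng.standard_normal(m)

AAT_inv = np.linalg.inv(A @ A.T)        # m is small here, so invert densely
M = LinearOperator((m, m), matvec=lambda v: AAT_inv @ v)

dy, info = cg(K, rhs, M=M)              # preconditioned conjugate gradients
print(info == 0)                        # True: CG reports convergence
print(np.allclose(K @ dy, rhs, atol=1e-4))
```

For problems of realistic size, (AA^T)^-1 would of course never be formed explicitly; only its action would be applied, e.g. via a factorization of AA^T computed once.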

Development Test Problems
A simple canonical problem is available for use during development:
  min (-x1 - 2 x2)
subject to the following constraints:
  -2 x1 +   x2 + x3 = 2
   -x1 + 2 x2 + x4 = 7
    x1 + 2 x2 + x5 = 3
  x1, x2, x3, x4, x5 >= 0
for which the closed-form solution is:
  0 <= x2 <= 1½,  x1 = 3 - 2 x2,  x3 = 8 - 5 x2,  x4 = 10 - 4 x2,  x5 = 0.
The AFIRO problem has been identified and a version obtained for use during development as a suitable test case until data for the OSD A&T application is available. Published solutions exist from several standard solvers.
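The canonical problem can be cross-checked with an off-the-shelf LP solver (SciPy here, as an independent stand-in for the development environment). Note the objective equals x5 - 3 on the feasible set, so the optimal value -3 is attained along the whole face x5 = 0, matching the one-parameter family above:

```python
import numpy as np
from scipy.optimize import linprog

# The canonical development problem in standard form; SciPy's LP solver
# provides an independent check (this is not the project's IPM code).
c = [-1.0, -2.0, 0.0, 0.0, 0.0]           # min -x1 - 2 x2
A_eq = [[-2.0, 1.0, 1.0, 0.0, 0.0],
        [-1.0, 2.0, 0.0, 1.0, 0.0],
        [ 1.0, 2.0, 0.0, 0.0, 1.0]]
b_eq = [2.0, 7.0, 3.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
x1, x2, x3, x4, x5 = res.x
print(f"objective = {res.fun:.6f}")        # -3.000000
# The returned vertex satisfies the closed-form relations:
print(np.allclose([x1, x3, x4, x5],
                  [3 - 2*x2, 8 - 5*x2, 10 - 4*x2, 0.0], atol=1e-6))  # True
```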

The AFIRO Test Problem
- X: 51 parameters
- C: depends on 5 of the 51 parameters
- A: 27 constraint equations (102 non-zeros)
[Figure: sparsity pattern of the 27 x 51 matrix A]

Intermediate Results - AFIRO
Started with all 51 values of x and z.
Ended with 31 x's and 22 z's at 0; 2 parameters (15 & 17) had x = z = 0.

Condition Numbers   Initial    Iteration 15    Iteration 30
D^2                 8523       1.36e+22        6.9e+35
A D^2 A^T           1211       1.73e+20        1.44e+34
QR(A D^2 A^T)       11         1.3e+10         1.99e+17

[Figure: solution parameter paths over 32 iterations]

Development Process Flow
Start → Develop Basic IPM System in Matlab (obtain test data) → Solution match? → Add Preconditioner to Basic Matlab IPM → Solution match? → Add PCG Solver → Solution match? → Add Factorization Solver → Solution match? → Conduct Final V&V → Obtain OSD A&T Data → Test on OSD A&T Data → Identify Areas for Speed Increase
At each stage, development proceeds only as time permits; the original flow chart distinguishes completed ("Done") from planned steps.

Schedule / Progress

Fall 2007
  Task                                     Duration (days)   Start         End
  Obtain AFIRO Data                        5                 1-Oct-2007    8-Oct-2007
  Develop Basic IPM System in Matlab       15                9-Oct-2007    2-Nov-2007
  Test Code                                2                 5-Nov-2007    7-Nov-2007
  Add Preconditioner to Basic Matlab IPM   15                8-Nov-2007    6-Dec-2007
  Test Code                                2                 7-Dec-2007    11-Dec-2007
  Brief Fall 2007 Progress                 1                 12-Dec-2007   13-Dec-2007
  Add PCG Solver                           15                14-Dec-2007   24-Jan-2008

Spring 2008
  Test Code                                2                 25-Jan-2008   29-Jan-2008
  Add Factorization Solver                 15                30-Jan-2008   20-Feb-2008
  Test Code                                2                 21-Feb-2008   25-Feb-2008
  Conduct V&V                              15                26-Feb-2008   18-Mar-2008
  Test on OSD/A&T Data                     10                19-Mar-2008   2-Apr-2008
  Identify Areas for Speed Improvements    5                 3-Apr-2008    10-Apr-2008
  Incorporate One Speed Improvement        15                11-Apr-2008   2-May-2008
  Conduct Incremental V&V                  3                 5-May-2008    8-May-2008
  Update OSD/A&T Testing                   2                 9-May-2008    13-May-2008
  Brief Spring 2008 Progress               2                 14-May-2008   15-May-2008

Status - Summary
In summary, the development is slightly ahead of schedule.
Caveat: When things appear to be going well, it proves that you don't know how things are really going.
Risk area: Obtaining the OSD A&T data, even in a sanitized form, may be difficult due to delays in the parent OSD A&T project resulting from the delay in Congress passing a DoD appropriations bill.
Mitigation Strategy: The following NETLIB LP test problems are of appropriate dimension to use as testing surrogates: KB2, SC50A, SC50B, ADLITTLE (in increasing dimension).

Backup Material

Developing The System Of Equations (1/3)
(Primal) Problem: min c^T x subject to Ax = b, x >= 0.
Dual Problem: max b^T y subject to A^T y <= c; alternately, subject to A^T y + z = c with z >= 0.
Penalty-function (log-barrier) augmented version:
  min B(x; µ) = c^T x - µ Σ ln xi
Optimality Conditions:
  c - µ X^-1 e - A^T y = 0
  Ax - b = 0

Developing The System Of Equations (2/3)
Collecting all these conditions:
  Ax - b = 0
  A^T y + z = c,  z >= 0,  x >= 0
  c - µ X^-1 e - A^T y = 0   →   X z = µ e
This produces the system of equations:
  X z - µ e = 0
  Ax - b = 0
  A^T y + z - c = 0

Developing The System Of Equations (3/3)
Solve this system using Newton's method, which increments (x, y, z) by solving J Δ = -F, where J is the Jacobian of the system above. The Newton step is:

  [ Z   0    X ] [Δx]   [ µe - Xz       ]
  [ A   0    0 ] [Δy] = [ b - Ax        ]
  [ 0   A^T  I ] [Δz]   [ c - A^T y - z ]

If we multiply the first block equation by Z^-1 we get:
  Δx + (Z^-1 X) Δz = µ Z^-1 e - x,   i.e.,   Δx + π Δz = r.
Similarly, at a feasible iterate the next two rows produce:
  A Δx = 0
  A^T Δy + Δz = 0
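The backup derivation can be verified numerically: assemble the full block Newton system, solve it, and confirm that eliminating Δx and Δz reproduces the Δy of the reduced equation. (Random feasible iterate, not project data; with the convention Δz = -A^T Δy used here the reduced right-hand side comes out as -A r, since sign conventions for the dual step vary across presentations.)

```python
import numpy as np

# Random primal/dual-feasible iterate for a made-up LP of size m x n.
rng = np.random.default_rng(3)
m, n = 3, 7
A = rng.standard_normal((m, n))
x = rng.uniform(0.5, 2.0, n)
z = rng.uniform(0.5, 2.0, n)
y = rng.standard_normal(m)
b = A @ x                # choose b so the iterate is primal feasible
c = A.T @ y + z          # choose c so the iterate is dual feasible
mu = 0.1
e = np.ones(n)

# Full Newton system  J [dx; dy; dz] = -F,  F = (Xz - mu e, Ax - b, A^T y + z - c)
X, Z, I = np.diag(x), np.diag(z), np.eye(n)
J = np.block([
    [Z,                np.zeros((n, m)), X               ],
    [A,                np.zeros((m, m)), np.zeros((m, n))],
    [np.zeros((n, n)), A.T,              I               ],
])
F = np.concatenate([X @ z - mu * e, A @ x - b, A.T @ y + z - c])
step = np.linalg.solve(J, -F)
dx, dy, dz = step[:n], step[n:n + m], step[n + m:]

# Reduced route:  pi = Z^-1 X,  r = mu Z^-1 e - x,  A pi A^T dy = -A r
pi = np.diag(x / z)
r = mu / z - x
dy_reduced = np.linalg.solve(A @ pi @ A.T, -(A @ r))
print(np.allclose(dy, dy_reduced))  # True: both routes give the same dy
```

Eliminating Δx and Δz this way is exactly why the solver only ever needs to factor or precondition the m x m matrix A π A^T rather than the full (2n + m)-dimensional Jacobian.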