Implementation of a KKT-based active-set QP solver


Slide 1: Implementation of a KKT-based active-set QP solver
ISMP 2006, International Symposium on Mathematical Programming, Rio de Janeiro, Brazil, July 30 - August 4, 2006
Hanh Huynh (SCCM Program, Stanford University, Stanford, CA), hhuynh@stanford.edu
Michael Saunders (Dept of Management Sci & Eng, Stanford University, Stanford, CA), saunders@stanford.edu

Slide 2: Abstract
Sparse SQP methods such as SNOPT need a QP solver that permits warm starts at each major iteration and can handle many degrees of freedom. An active-set QP method with direct KKT solves seems the only option. We discuss some implementation issues such as updating the KKT factors, scaling the QP Hessian, and recovering from KKT singularity.
Acknowledgements: Philip Gill, UC San Diego; Comsol, Inc.

Slide 3: Why a new QP solver?

Slides 4-7: SNOPT
Sparse SQP solver for NLP (Gill, Murray & Saunders 2005).
Sequence of QP subproblems:
  QP_k:  minimize over x  $g_k^T x + \tfrac{1}{2} x^T H_k x$  subject to linearized constraints and bounds.
Limited-memory quasi-Newton Hessian: $H_0 = I$ or diagonal, $H_1 = (I + v u^T) H_0 (I + u v^T)$, etc. (see the sketch below).
Warm start, few iterations: active-set method SQOPT.
SQOPT's reduced Hessian $Z^T H_k Z$ can be large.
CG on $Z^T H_k Z$ works unexpectedly well sometimes.
Need a QP solver that works with KKT systems.
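
The sketch referred to above is a minimal MATLAB illustration of applying the limited-memory quasi-Newton Hessian $H_1 = (I + vu^T) H_0 (I + uv^T)$ to a vector without forming it; the vectors u, v and the diagonal H_0 here are invented for the example and are not SNOPT's actual update data.

    % Apply H1 = (I + v*u')*H0*(I + u*v') to a vector x without forming H1.
    % H0 is diagonal; u, v, x are made-up data for illustration only.
    n  = 6;
    H0 = spdiags((1:n)', 0, n, n);       % H0 = diagonal (or I)
    u  = randn(n,1);  v = randn(n,1);  x = randn(n,1);

    Hx = x + u*(v'*x);                   % (I + u*v')*x
    Hx = H0*Hx;                          % H0*(...)
    Hx = Hx + v*(u'*Hx);                 % (I + v*u')*(...)

    H1 = (eye(n) + v*u') * full(H0) * (eye(n) + u*v');   % explicit check (small n)
    fprintf('error = %g\n', norm(Hx - H1*x));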

Slide 8: Linear Algebra Q1 - Updating basis factors

Slides 9-10: Updating a basis
Aim: Use SuperLU, PARDISO, ... as black-box solvers for $B_0$.
Product-form update: $B_k = B_0 T_1 T_2 \cdots T_k$ (simple, but perhaps dense, unstable).
Schur-complement update (Bisschop & Meeraus 1977):
  Initially:  $B_0 x = b_0$
  Later:  $\begin{pmatrix} B_0 & V_k \\ W_k^T & \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_k \\ 0 \end{pmatrix}$
2 solves with $B_0$, 1 solve with $C_k = W_k^T B_0^{-1} V_k$ (small), sparse products $V_k v$, $W_k^T w$.
(A numerical sketch of this bordered solve follows below.)
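
To make the Schur-complement update concrete, here is a small MATLAB sketch of the bordered solve, using backslash on a random sparse $B_0$ as a stand-in for a black-box SuperLU/PARDISO factorization; the dimensions, data, and right-hand-side blocks are invented for illustration.

    % Solve [B0 V; W' 0]*[x1; x2] = [bk; 0] using only solves with B0
    % plus the small Schur complement C = W'*inv(B0)*V.
    rng(1);
    m = 200;  k = 5;                          % sizes are made up
    B0 = sprandn(m, m, 0.02) + 10*speye(m);   % stand-in for the basis B0
    V  = sprandn(m, k, 0.05);
    W  = sprandn(m, k, 0.05);
    bk = randn(m, 1);

    C  = W' * (B0 \ V);                       % in practice C is built up column by column
    x2 = C \ (W' * (B0 \ bk));                % 1 solve with B0, 1 solve with small C
    x1 = B0 \ (bk - V * x2);                  % 1 more solve with B0

    res = norm([B0*x1 + V*x2 - bk;  W'*x1]);  % residual of the bordered system
    fprintf('bordered-system residual = %g\n', res)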

Slide 11: Linear Algebra Q2 - Updating KKT factors

Slide 12: Updating KKT systems for a QP active-set solver
Aim: Use MA57, PARDISO, ... as black-box solvers for $K_0$.
  Initially:  $K_0 y = d$
  Later:  $\begin{pmatrix} K_0 & V_k \\ V_k^T & \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} d_1 \\ d_2 \end{pmatrix}$
Existing work: Gould & Toint, Gondzio.

Slides 13-15: QPA in GALAHAD
Active-set QP solver (Gould and Toint 2001).
Sequence of updated KKT systems:
  $\begin{pmatrix} K_0 & V \\ V^T & \end{pmatrix} = \begin{pmatrix} I & \\ V^T K_0^{-1} & I \end{pmatrix} \begin{pmatrix} K_0 & V \\ & C \end{pmatrix}$
$K_0 = L_0 D_0 L_0^T$ via MA27 or MA57;  $C = V^T K_0^{-1} V$ factored by SCU (small).
2 solves with $K_0$, 1 solve with $C$, products $Vv$, $V^T w$.
1 solve with $K_0$ if $K_0^{-1} V$ were stored (but it is fairly dense).

Slides 16-17: Symmetric block-LU updates
  $\begin{pmatrix} K_0 & V \\ V^T & \end{pmatrix} = \begin{pmatrix} L_0 & \\ Y^T D_0^{-1} & I \end{pmatrix} \begin{pmatrix} D_0 L_0^T & Y \\ & C \end{pmatrix}$,   $L_0 Y = V$,   $C = Y^T D_0^{-1} Y$
$K_0 = L_0 D_0 L_0^T$ via MA27, MA57, PARDISO, ...
$C$ factored by LUMOD ($LC = U$, $L$ square, small).
Solves with $L_0$, $D_0$, $C$, $D_0$, $L_0^T$; products with $Y$, $Y^T$.
MA57 treats $L_0$, $D_0$ as two black boxes.
$Y$ is likely to be well-conditioned and sparse (Thanks Iain!).
(A MATLAB sketch of this update follows below.)
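
A minimal MATLAB sketch of the symmetric case, using the built-in dense ldl as a stand-in for a black-box sparse $L_0 D_0 L_0^T$ code such as MA57; the KKT data, the update columns V, and the right-hand side are invented, and the Schur complement is taken with a minus sign so that the bordered solve is exact when the (2,2) block of the bordered matrix is zero (the slides state the magnitude only).

    % Solve the bordered KKT system [K0 V; V' 0]*[y1; y2] = [d1; d2]
    % reusing a fixed LDL' factorization of K0 (dense sketch with made-up data;
    % the real solver would call a sparse black-box code such as MA57).
    rng(2);
    n = 60;  m = 20;  k = 4;
    H  = diag(1 + rand(n,1));
    A  = [eye(m) randn(m, n-m)];              % full row rank by construction
    K0 = [H A'; A zeros(m)];                  % symmetric indefinite KKT matrix
    V  = randn(n+m, k);
    d1 = randn(n+m, 1);  d2 = randn(k, 1);

    [L0, D0, P0] = ldl(K0);                   % K0 = P0*L0*D0*L0'*P0'
    Y  = L0 \ (P0' * V);                      % L0*Y = P0'*V
    C  = -(Y' * (D0 \ Y));                    % Schur complement; sign assumes a zero (2,2) block
    t  = D0 \ (L0 \ (P0' * d1));
    y2 = C \ (d2 - Y' * t);                   % small k-by-k solve
    y1 = P0 * (L0' \ (D0 \ (L0 \ (P0' * (d1 - V * y2)))));

    res = norm([K0*y1 + V*y2 - d1;  V'*y1 - d2]);
    fprintf('residual = %g\n', res)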

Slide 18: Unsymmetric block-LU updates
  $\begin{pmatrix} K_0 & V \\ W^T & \end{pmatrix} = \begin{pmatrix} L_0 & \\ Z^T & I \end{pmatrix} \begin{pmatrix} U_0 & Y \\ & C \end{pmatrix}$,   $L_0 Y = V$,   $U_0^T Z = W$,   $C = Z^T Y$
$K_0 = L_0 U_0$ via LUSOL, SuperLU, UMFPACK, PARDISO, ...
Solves with $L_0$, $U_0$, $C$; products with $Y$, $Z^T$.
$L_0$, $U_0$ are two black boxes.
$Y$ and $Z$ are likely to be sparse.
(A MATLAB sketch of this update follows below.)
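
For the unsymmetric case, a corresponding MATLAB sketch using the sparse [L,U,P,Q] = lu factorization (UMFPACK) as the black box; sizes and data are invented, and the sign of the Schur complement again assumes a zero (2,2) block in the bordered matrix.

    % Bordered solve with an unsymmetric block-LU update:
    % [K0 V; W' 0]*[y1; y2] = [d1; d2], with P*K0*Q = L0*U0 from sparse lu.
    rng(3);
    N = 150;  k = 6;
    K0 = sprandn(N, N, 0.03) + 5*speye(N);    % made-up stand-in for a KKT matrix
    V  = sprandn(N, k, 0.05);
    W  = sprandn(N, k, 0.05);
    d1 = randn(N, 1);  d2 = randn(k, 1);

    [L0, U0, P, Q] = lu(K0);                  % P*K0*Q = L0*U0
    Y  = L0 \ (P * V);                        % L0*Y = P*V
    Z  = U0' \ (Q' * W);                      % U0'*Z = Q'*W
    C  = -(Z' * Y);                           % Schur complement; sign for a zero (2,2) block
    t  = L0 \ (P * d1);
    y2 = C \ (d2 - Z' * t);
    y1 = Q * (U0 \ (L0 \ (P * (d1 - V * y2))));

    fprintf('nnz(Y) = %d, nnz(Z) = %d, residual = %g\n', nnz(Y), nnz(Z), ...
            norm([K0*y1 + V*y2 - d1;  W'*y1 - d2]))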

Slide 19: QPBLU
Active-set QP solver (Hanh Huynh's thesis).
  QP:  minimize  $c^T x + \tfrac{1}{2} x^T H x$  subject to $Ax = b$, $l \le x \le u$.
Matlab prototype, F90 version under way.
Block-LU updates of KKT factors $K_0 = L_0 U_0$ with black-box solvers for $L_0$, $U_0$.
SNOPT's $H_1 = (I + vu^T) H_0 (I + uv^T)$, etc., handled by block-LU updates.

Slide 20: Experiments with Matlab QPBLU
  QP:  minimize  $c^T x + \tfrac{1}{2} x^T x$  subject to $Ax = b$, $l \le x \le u$.
QP problems with $H = I$; $A$, $b$, $c$, $l$, $u$ come from the LPnetlib collection (Tim Davis).
[L0,U0,P,Q] = lu(K0) via UMFPACK (Tim Davis).
20 block-LU updates, then factorize the current KKT matrix.
(A sketch of this setup appears below.)
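
A minimal MATLAB sketch of that setup: build a KKT matrix with $H = I$ and factorize it with UMFPACK through MATLAB's sparse lu. A random sparse A stands in for an LPnetlib problem and the choice of initial active set is ignored, so this only illustrates the factorization call, not the full experiment.

    % Build K0 = [H A'; A 0] with H = I and factorize it with UMFPACK via lu.
    % A random sparse A stands in for an LPnetlib matrix (illustration only).
    rng(4);
    m = 100;  n = 300;
    A  = [speye(m) sprandn(m, n-m, 0.01)];        % full row rank stand-in for LP data
    H  = speye(n);
    K0 = [H A'; A sparse(m,m)];

    [L0, U0, P, Q] = lu(K0);                      % P*K0*Q = L0*U0 (UMFPACK)
    fprintf('nnz(K0) = %d, nnz(L0)+nnz(U0) = %d\n', nnz(K0), nnz(L0) + nnz(U0));

    % In the experiments, roughly 20 block-LU updates were applied before
    % refactorizing the current KKT matrix from scratch.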

Slide 21: Block-LU updates
  $\begin{pmatrix} K_0 & V \\ W^T & \end{pmatrix} = \begin{pmatrix} L_0 & \\ Z^T & I \end{pmatrix} \begin{pmatrix} U_0 & Y \\ & C \end{pmatrix}$
Two possible implementations:
  $L_0 = I$, $U_0 = K_0$:   $Y = V$,   $U_0^T L_0^T Z = W$
  Separate $L_0$ and $U_0$:   $L_0 Y = V$,   $U_0^T Z = W$
Compare nonzeros in $Y$ and $Z$ (see the sketch below).
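
A short MATLAB sketch of that comparison, counting nonzeros in Y and Z under the two implementations; the matrix K0 and the update columns are made-up stand-ins, so only the qualitative trend is meaningful.

    % Compare nnz(Y), nnz(Z) for the two implementations of the block-LU update.
    rng(5);
    N = 200;  k = 8;
    K0 = sprandn(N, N, 0.02) + 5*speye(N);
    V  = sprandn(N, k, 0.05);
    W  = sprandn(N, k, 0.05);

    % Implementation 1: L0 = I, U0 = K0 (treat K0 itself as the black box).
    Y1 = V;
    Z1 = K0' \ W;                             % U0'*L0'*Z = K0'*Z = W

    % Implementation 2: separate triangular factors L0, U0 from sparse lu.
    [L0, U0, P, Q] = lu(K0);                  % P*K0*Q = L0*U0
    Y2 = L0 \ (P * V);
    Z2 = U0' \ (Q' * W);

    fprintf('L0 = I, U0 = K0 :  nnz(Y) = %5d, nnz(Z) = %5d\n', nnz(Y1), nnz(Z1));
    fprintf('separate L0, U0 :  nnz(Y) = %5d, nnz(Z) = %5d\n', nnz(Y2), nnz(Z2));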

Slide 22: Black-box $K_0$ vs separate $L_0$, $U_0$
[Figure: for data source capri, number of nonzeros in $Y$ and $Z$ against iterations; one plot with $K_0 = I \cdot K_0$, one with $K_0 = L_0 U_0$.]

Slide 23: Linear Algebra Q3 - Rank-revealing factors

Slide 24: LUSOL
Three pivoting options:
  TPP - Threshold Partial Pivoting
  TRP - Threshold Rook Pivoting
  TCP - Threshold Complete Pivoting

Slide 25: TPP - Threshold Partial Pivoting
Require $|L_{ij}| \le \tau$, $\tau = 2.0$ say (not 1.0).
(A sketch of the effect of the threshold follows below.)
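
A small MATLAB sketch of the idea behind a pivot threshold, using UMFPACK's partial-pivot tolerance through sparse lu(A, thresh); a tolerance of 0.5 corresponds roughly to bounding the multipliers by $|L_{ij}| \le 2$. The matrix is random and invented for illustration; LUSOL's TPP/TRP/TCP options themselves are not exposed in MATLAB.

    % Effect of a pivot threshold on the size of the multipliers |L(i,j)|.
    % lu(A, thresh) uses UMFPACK's partial-pivot tolerance: a pivot must be at
    % least thresh times the largest entry in its column, so |L(i,j)| <= 1/thresh.
    rng(6);
    A = sprandn(300, 300, 0.02) + speye(300);

    [L1, U1, P1, Q1] = lu(A, 1.0);            % strict partial pivoting: |L(i,j)| <= 1
    [L2, U2, P2, Q2] = lu(A, 0.5);            % threshold tau = 2:       |L(i,j)| <= 2

    fprintf('thresh = 1.0: max|L| = %.2f, nnz(L)+nnz(U) = %d\n', ...
            full(max(abs(L1(:)))), nnz(L1) + nnz(U1));
    fprintf('thresh = 0.5: max|L| = %.2f, nnz(L)+nnz(U) = %d\n', ...
            full(max(abs(L2(:)))), nnz(L2) + nnz(U2));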

Slide 26: TRP - Threshold Rook Pivoting

Slide 27: TCP - Threshold Complete Pivoting

Slides 28-29: Rank-Revealing Factors
$A = X D Y^T$
Demmel et al. (1999): if $X$, $Y$ are well-conditioned, then $\mathrm{cond}(A) \approx \mathrm{cond}(D)$.
  SVD                                                  $U D V^T$
  QR with column interchanges                          $Q D R$
  LU with Rook Pivoting                                $L D U$
  LU with Complete Pivoting                            $L D U$
  MA27, MA57 ($|L_{ij}|$ bounded, $D$ block-diagonal)  $L D L^T$
(A numerical illustration follows below.)
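
A quick MATLAB check of the Demmel et al. observation: build $A = X D Y^T$ with well-conditioned (here orthogonal) $X$ and $Y$ and a wide-ranging diagonal $D$, and compare cond(A) with cond(D). The construction is invented purely to illustrate the estimate.

    % If X and Y are well-conditioned, cond(A) is close to cond(D) for A = X*D*Y'.
    rng(7);
    n = 50;
    [X, ~] = qr(randn(n));                    % orthogonal, so cond(X) = 1
    [Y, ~] = qr(randn(n));
    D = diag(logspace(0, -8, n));             % singular-value-like spread: cond(D) = 1e8
    A = X * D * Y';

    fprintf('cond(D) = %.3e\n', cond(D));
    fprintf('cond(A) = %.3e\n', cond(A));     % essentially equal here (X, Y orthogonal)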

Slide 30: Linear Algebra Q4 - KKT repair

Slide 31: Two-stage KKT repair
  $K = \begin{pmatrix} H & A^T \\ A & \end{pmatrix}$
[L,U,p,q] = lusol(A) with Threshold Rook Pivoting detects singularities in $A$.
[L,D,p] = ma57(K) with a strict pivot tolerance detects singularities in $K$.
(A sketch of the detection step follows below.)
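
A rough MATLAB sketch of the detection step only, using the built-in dense lu and ldl as stand-ins for LUSOL-with-rook-pivoting and MA57 (neither of which is called here); the rank-deficient A is manufactured for the example, and the small-pivot tolerance is an arbitrary choice.

    % Two-stage singularity detection (stand-in sketch): the real implementation
    % factors A with LUSOL (threshold rook pivoting) and K with MA57; here the
    % built-in dense lu and ldl are used instead, on made-up data.
    rng(8);
    m = 30;  n = 80;
    A = randn(m, n);
    A(m, :) = A(1, :);                        % manufacture a rank deficiency in A

    tol = 1e-8;                               % arbitrary small-pivot tolerance
    [LA, UA, PA] = lu(A);                     % partial pivoting (TRP is safer in practice)
    dU = abs(diag(UA));
    fprintf('stage 1: estimated rank(A) = %d of %d rows\n', nnz(dU > tol*max(dU)), m);

    H = eye(n);
    K = [H A'; A zeros(m)];                   % K inherits the singularity of A
    [LK, DK, PK] = ldl(K);
    nsmall = nnz(abs(eig(DK)) < tol);
    fprintf('stage 2: %d near-zero pivot eigenvalue(s) in D\n', nsmall);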

Slide 32: Linear Algebra Q5 - Condition of $K_0$

Slide 33: Scaling H
As we know from least squares, $\begin{pmatrix} \alpha I & A^T \\ A & \end{pmatrix}$ is better conditioned if $\alpha \approx \sigma_{\min}(A)$.
Hence, QPBLU solves
  QP:  minimize  $\alpha(c^T x + \tfrac{1}{2} x^T H x) + \omega\,\mathrm{suminf}$ (sum of infeasibilities)  subject to $Ax = b$, $l \le x \le u$.
$K_0 = \begin{pmatrix} \alpha H_0 & A_0^T \\ A_0 & \end{pmatrix}$ is better conditioned (if we can guess a good $\alpha$). Schur complements $C_k = V_k^T K_0^{-1} V_k$ also.
(A numerical illustration follows below.)
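
A small MATLAB illustration of the conditioning claim for the least-squares-style matrix: compare the condition number of $\begin{pmatrix} \alpha I & A^T \\ A & \end{pmatrix}$ for $\alpha = 1$ and for $\alpha \approx \sigma_{\min}(A)$ on a matrix with a deliberately small smallest singular value. The construction of A is invented for the example.

    % Conditioning of [alpha*I A'; A 0] for alpha = 1 versus alpha ~ sigma_min(A).
    rng(9);
    m = 20;  n = 60;
    A = randn(m, n);
    [U, S, V] = svd(A, 'econ');
    S(m, m)   = 1e-6;                          % force a small sigma_min(A)
    A = U * S * V';

    Kfun = @(alpha) [alpha*eye(n) A'; A zeros(m)];
    for alpha = [1, min(svd(A))]
        fprintf('alpha = %8.1e :  cond(K) = %.3e\n', alpha, cond(Kfun(alpha)));
    end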

Slides 34-36: QPBLU active-set QP solver - Summary
KKT solves:  $K_0 = \begin{pmatrix} H_0 & A_0^T \\ A_0 & \end{pmatrix}$
Block-LU updates:  $\begin{pmatrix} K_0 & V \\ W^T & D \end{pmatrix} = \begin{pmatrix} L_0 & \\ Z^T & I \end{pmatrix} \begin{pmatrix} U_0 & Y \\ & C \end{pmatrix}$
Black-box solvers for $L_0$, $U_0$.
$Y$, $Z$ sparse, so worth storing.
One hope for parallelism in LP/QP/NLP solvers.
Sparse LU and LDL^T solvers LUSOL, MA27, MA57 are already suitable.
Need $|L_{ij}| \le 2$ (say) to be rank-revealing.
Request to SuperLU, PARDISO, UMFPACK, ... developers:
  Separate solves with $L_0$, $D_0$, $U_0$.
  At least one factor well-conditioned (tell us which one!).
  Options for rank-revealing factors of $K_0$.
