Experiments with iterative computation of search directions within interior methods for constrained optimization

1 Experiments with iterative computation of search directions within interior methods for constrained optimization
Santiago Akle and Michael Saunders
Institute for Computational and Mathematical Engineering (ICME), Stanford University
Twelfth Copper Mountain Conference on Iterative Methods
Copper Mountain, Colorado, March 25-30, 2012

2 Outline
1. PDCO
2. Iterative solvers
3. colnorms, QR
4. Partial Cholesky or QR
5. Numerical results

3 Abstract
Our primal-dual interior-point optimizer PDCO has found many applications for optimization problems of the form
\[ \min\ \phi(x) \quad \text{s.t.}\ Ax = b,\ \ell \le x \le u, \]
in which $\phi(x)$ is convex and $A$ is a sparse matrix or a linear operator. We focus on the latter case and the need for iterative methods to compute dual search directions from linear systems of the form $A D^2 A^T \Delta y = r$, with $D$ diagonal and positive definite. Although the systems are positive definite, they need not be solved accurately, and there is reason to use MINRES rather than CG (see the PhD thesis of David Fong (2011)). When the original problem is regularized, the systems can be converted to least-squares problems, and there is similar reason to use LSMR rather than LSQR. Also, $D$ becomes increasingly ill-conditioned as the interior method proceeds, so some kind of preconditioning is needed, such as the partial Cholesky approach of Bellavia, Gondzio, and Morini (2011). We present numerical results on matters such as these.

4 PDCO Primal-Dual Interior Method
\[ \min_x\ c^T x \quad \text{subject to}\ Ax = b,\ x \ge 0 \]
PDCO is a Matlab solver for such problems.
$A$ may be a sparse matrix or a linear operator.

5 PDCO Primal-Dual Interior Method (regularized)
\[ \min_{x,r}\ c^T x + \tfrac12\|\gamma x\|^2 + \tfrac12\|r\|^2 \quad \text{subject to}\ Ax + \delta r = b,\ x \ge 0 \]
$\gamma$ and $\delta \approx 10^{-4}$ for linear programs; $\delta = 1$ for nonnegative least squares.
PDCO is a Matlab solver for such problems.
$A$ may be a sparse matrix or a linear operator.

6 Primal-Dual Interior Method
PDCO solves a sequence of nonlinear equations
\[ Ax + \delta^2 y = b, \qquad A^T y + z = c + \gamma^2 x, \qquad Xz = \mu e, \]
with $X = \mathrm{diag}(x)$ and $\mu \to 0$. Newton's method:
\[ \begin{pmatrix} A & \delta^2 I & \\ -\gamma^2 I & A^T & I \\ Z & & X \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix} = \begin{pmatrix} r_1 \\ r_2 \\ r_3 \end{pmatrix} \]
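To make the Newton step concrete, here is a minimal Matlab sketch that assembles and solves this block system on random data. The sizes, variable names, and residual sign conventions are illustrative assumptions, not PDCO's actual code.

% Minimal sketch of one PDCO-style Newton step (hypothetical data;
% residual sign conventions assumed, not taken from PDCO itself).
m = 40;  n = 120;
A = sprandn(m,n,0.2);  gamma = 1e-4;  delta = 1e-4;  mu = 1e-2;
b = randn(m,1);  c = randn(n,1);
x = rand(n,1) + 0.1;  z = rand(n,1) + 0.1;  y = randn(m,1);
X = spdiags(x,0,n,n);  Z = spdiags(z,0,n,n);
r1 = b - A*x - delta^2*y;                 % primal residual
r2 = c + gamma^2*x - A'*y - z;            % dual residual
r3 = mu*ones(n,1) - x.*z;                 % perturbed complementarity
K = [A,                 delta^2*speye(m), sparse(m,n);
     -gamma^2*speye(n), A',               speye(n);
     Z,                 sparse(n,m),      X];
d  = K \ [r1; r2; r3];                    % Newton direction
dx = d(1:n);  dy = d(n+1:n+m);  dz = d(n+m+1:end);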

7 PDCO search direction
Define $D^2 = (X^{-1}Z + \gamma^2 I)^{-1}$: positive definite diagonal, with big and small elements.
Solve either
\[ (A D^2 A^T + \delta^2 I)\,\Delta y = A D^2 r_4 + r_1 \]
or
\[ \min_{\Delta y} \left\| \begin{pmatrix} D A^T \\ \delta I \end{pmatrix} \Delta y - \begin{pmatrix} D r_4 \\ r_1/\delta \end{pmatrix} \right\|. \]
$D$ changes each PDCO iteration and is increasingly ill-conditioned.
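The two formulations are mathematically equivalent, which a few lines of Matlab can verify on random data (a sketch with assumed sizes and names, not PDCO code).

% Sketch: the normal-equations and least-squares forms give the same dy.
m = 50;  n = 200;  delta = 1e-4;
A  = sprandn(m,n,0.1);
d  = 10.^(2 - 4*rand(n,1));               % positive diagonal, wide range
D  = spdiags(d,0,n,n);
r1 = randn(m,1);  r4 = randn(n,1);
dy1 = (A*D^2*A' + delta^2*speye(m)) \ (A*D^2*r4 + r1);
dy2 = [D*A'; delta*speye(m)] \ [D*r4; r1/delta];   % sparse QR solve
norm(dy1 - dy2)/norm(dy1)                 % small, up to conditioning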

8 Iterative solvers
spd systems
least squares

9 Iterative solvers for PDCO
\[ (A D^2 A^T + \delta^2 I)\,\Delta y = A D^2 r_4 + r_1 \]
CG or MINRES
\[ \min_{\Delta y} \left\| \begin{pmatrix} D A^T \\ \delta I \end{pmatrix} \Delta y - \begin{pmatrix} D r_4 \\ r_1/\delta \end{pmatrix} \right\| \]
LSQR or LSMR

10 Iterative solvers for PDCO (continued)
Comparisons: Fong and S 2011a,b; Fong thesis 2011.

11 Backward errors for CG and MINRES on spd Ax = b
\[ (A + E_k)x_k = b, \qquad r_k = b - Ax_k, \qquad E_k = \frac{r_k x_k^T}{\|x_k\|^2}, \qquad \|E_k\| = \frac{\|r_k\|}{\|x_k\|}. \]

12 Backward errors for CG and MINRES (continued)
We know
$\|r_k\|$ for MINRES (but not for CG)
$\|x_k\|$ for CG (Steihaug 1983)
$\|x_k\|$ for MINRES (Fong 2011)

13 Backward errors for CG and MINRES (continued)
Plot $\log_{10}\|E_k\|$ for CG and MINRES.
Data: Tim Davis's sparse matrix collection; real spd examples Ax = b that include b.
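A small Matlab experiment of the kind being plotted, with a random spd stand-in for the Davis collection matrices (a sketch, not the actual experiment code).

% Sketch: backward error ||r_k||/||x_k|| along MINRES iterates.
n = 400;  B = randn(n);  A = B'*B + n*eye(n);  b = randn(n,1);
backerr = zeros(20,1);
for k = 1:20
    [xk,flag] = minres(A, b, 1e-14, k);   % run (at most) k iterations
    rk = b - A*xk;
    backerr(k) = norm(rk)/norm(xk);       % ||E_k|| from the formula above
end
semilogy(1:20, backerr)                   % cf. the plots on the next slide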

14 Backward errors for CG and MINRES
[Four plots of log(||r||/||x||) against iteration count, comparing CG and MINRES on spd matrices Simon_raefsky4 (19779 x 19779), Cannizzo_sts4098 (4098 x 4098, nnz 72356), Schenk_AFE_af_shell8 (504855 x 504855), and BenElechi_BenElechi1 (245874 x 245874).]

15 Backward errors for LSQR and LSMR
Stewart (1977): for $\min \|(A + E_k)x_k - b\|$,
\[ E_k = \frac{r_k r_k^T A}{\|r_k\|^2}, \qquad r_k = b - Ax_k, \qquad \|E_k\| = \frac{\|A^T r_k\|}{\|r_k\|}. \]

16 Backward errors for LSQR and LSMR (continued)
LSQR $\equiv$ CG on $A^T A x = A^T b$
LSMR $\equiv$ MINRES on $A^T A x = A^T b$
Hence we know $\|A^T r_k\|$ for LSMR (but not for LSQR).
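The analogous Matlab check for least squares (a sketch on hypothetical data).

% Sketch: Stewart's backward error ||A'*r_k||/||r_k|| for LSQR iterates.
m = 500;  n = 80;
A = randn(m,n);  b = randn(m,1);
for k = [5 10 20]
    [xk,flag] = lsqr(A, b, 1e-14, k);     % at most k iterations of LSQR
    rk = b - A*xk;
    fprintf('k = %2d   ||A''r||/||r|| = %8.2e\n', k, norm(A'*rk)/norm(rk))
end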

17 Measure of convergence
For $\min \|Ax - b\|$ with $r_k = b - Ax_k$: $r_k \to \hat r$ and $A^T r_k \to 0$.

18 Measure of convergence: LSQR
[Plots of ||r|| and log||A^T r|| against iteration count for LSQR on lp_fit1p (1677 x 627, nnz 9868).]

19 Measure of convergence: LSQR vs LSMR
[Plots of ||r|| and log||A^T r|| against iteration count comparing LSQR and LSMR on lp_fit1p (1677 x 627, nnz 9868).]

20 Two linear algebra tools
colnorms.m
Householder QR

21 colnorms.m
$C = [c_1, \dots, c_n]$ ($m \times n$).
Estimates all $\|c_j\|$ from $p$ products $C^T v$ with $p \ll n$, each $v$ = randn(m,1).
If $A = C^T C$, we estimate diag(A) from colnorms(C).^2.
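colnorms.m itself is not reproduced on the slide; the following is a minimal sketch of the idea (function name and interface are assumptions), using the fact that E[(c_j^T v)^2] = ||c_j||^2 for Gaussian v.

% Sketch of a colnorms-style estimator (interface assumed, not the
% authors' colnorms.m).  Needs only products C'*v, so C may be an operator.
function nrms = colnorms_sketch(Ctv, m, n, p)
  % Ctv: function handle computing C'*v for an m-vector v.
  s = zeros(n,1);
  for i = 1:p
      v = randn(m,1);                     % random Gaussian probe
      w = Ctv(v);                         % one product with C'
      s = s + w.^2;                       % E[(c_j'*v)^2] = ||c_j||^2
  end
  nrms = sqrt(s/p);                       % estimates of ||c_j||, j = 1:n
end

With A = C'*C, colnorms_sketch(@(v) C'*v, m, n, p).^2 then estimates diag(A) without ever forming A.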

22 Householder QR
\[ W = Q \begin{pmatrix} R \\ 0 \end{pmatrix} = \begin{pmatrix} Y & Z \end{pmatrix} \begin{pmatrix} R \\ 0 \end{pmatrix} = YR \qquad (W\ \text{is}\ n \times k) \]
$Q$ = product of Householder transformations,
\[ Y = Q \begin{pmatrix} I \\ 0 \end{pmatrix}, \qquad Z = Q \begin{pmatrix} 0 \\ I \end{pmatrix}. \]
$Q$, $Y$, $Z$ are fast operators if $k$ is small.
Choose $W$ to be approximate eigenvectors (say).
Householder QR orthogonalizes $W$ and represents the full $Q$.
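In Matlab the same objects come from qr, though the built-in returns Q explicitly rather than in factored Householder form (a sketch; at PDCO scale Q would be kept as a product of operators).

% Sketch: Y and Z from the QR factorization of a tall-skinny W.
n = 500;  k = 10;
W = randn(n,k);                           % e.g. approximate eigenvectors
[Q,R] = qr(W);                            % full n-by-n Q (explicit here)
Y = Q(:,1:k);                             % orthonormal basis of range(W)
Z = Q(:,k+1:end);                         % orthonormal complement
norm(W - Y*R(1:k,:))                      % ~ 0, since W = Y*R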

23 Partial Cholesky or partial QR

24 Two ideas for spd Hx = b
Part direct, part iterative (hybrid!)
Schur complement CG (Axelsson 1994)
Partial Cholesky preconditioning (Bellavia, Gondzio, Morini 2011)

25 Two ideas for spd Hx = b (continued)
Transform first: $Q^T H Q\, y = Q^T b$, with $Q$ from Householder QR on $W$ = 10, 20, 50, ... approximate eigenvectors of $H$.
Two-level (subspace splitting) Schur complement CG (Hanke and Vogel 1999)
Subspace preconditioned LSQR (Jacobsen, Hansen, and S 2003)
= partial QR equivalent of partial Cholesky

26 Partial Cholesky preconditioning for spd Hx = b
$Q = [Y\ Z]$, $Y$ is $n \times k$ for small $k$.
BGM 2011: $Q$ = permutation (approximate diagonal pivoting).
Our experiments: $Q$ from Householder QR on $W$ = $k$ approximate eigenvectors.
\[ Q^T H Q = M = \begin{pmatrix} L_1 & \\ L_2 & I \end{pmatrix} \begin{pmatrix} I & \\ & S \end{pmatrix} \begin{pmatrix} L_1^T & L_2^T \\ & I \end{pmatrix} \approx \begin{pmatrix} L_1 & \\ L_2 & I \end{pmatrix} \begin{pmatrix} I & \\ & \tilde S \end{pmatrix} \begin{pmatrix} L_1^T & L_2^T \\ & I \end{pmatrix} \]
with $\tilde S = \mathrm{diag}(S)$, or colnorms gives $\tilde S \approx \mathrm{diag}(S)$.
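A minimal Matlab sketch of building and applying such a preconditioner for an explicit spd M (the dense stand-in for M and all names are assumptions; BGM 2011 work matrix-free, which this sketch does not attempt).

% Sketch: partial Cholesky preconditioner with S replaced by diag(S).
n = 300;  B = randn(n);  M = B'*B + n*eye(n);   % stand-in for Q'*H*Q
k   = 50;                                 % leading block eliminated exactly
M11 = M(1:k,1:k);  M21 = M(k+1:n,1:k);  M22 = M(k+1:n,k+1:n);
L1  = chol(M11, 'lower');                 % full Cholesky of leading block
L2  = M21 / L1';                          % solves L2*L1' = M21
dS  = diag(M22) - sum(L2.^2, 2);          % diagonal of Schur complement S
% Apply P \ r, where P = [L1 0; L2 I] blkdiag(I, diag(dS)) [L1' L2'; 0 I]:
r  = randn(n,1);
w1 = L1 \ r(1:k);  w2 = r(k+1:n) - L2*w1; % lower block-triangular solve
v2 = w2 ./ dS;                            % diagonal (approximate S) solve
u  = [L1' \ (w1 - L2'*v2); v2];           % upper block-triangular solve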

27 Numerical Results
PDCO on LPs and a satellite image problem:
LP degen3
LP fit2p
Satellite

28 MINRES on $(A D^2 A^T + \delta^2 I)\,\Delta y = A D^2 r_4 + r_1$ at PDCO iteration 1, middle, end
Think of the systems as $Hx = b$, transformed to $Q^T H Q\, y = Q^T b$.
Preconditioners constructed from $k = 50$ columns of partial Cholesky.
$Q$ = permutation for diagonal pivoting, or Householder QR on $k$ approximate eigenvectors.

29 MINRES on $(A D^2 A^T + \delta^2 I)\,\Delta y = A D^2 r_4 + r_1$ (continued)
Schur complement $S = C^T C$ approximated by diag(S) or colnorms(C).

30 LP degen3, PDCO itn 1 [results figure]

31 LP degen3, PDCO itn 15 [results figure]

32 LP degen3, PDCO itn 32 [results figure]

33 LP fit2p, PDCO itn 1 [results figure]

34 LP fit2p, PDCO itn 12 [results figure]

35 LP fit2p, PDCO itn 26 [results figure]

36 Satellite, PDCO itn 1 [results figure]

37 Satellite, PDCO itn 9 [results figure]

38 Satellite, PDCO itn 17 [results figure]

39 References
O. Axelsson (1994), Iterative Solution Methods, Cambridge Univ Press.
M. Hanke and C. R. Vogel (1999), Two-level preconditioners for regularized inverse problems, Numer. Math. 83.
M. Jacobsen, P. C. Hansen, and M. A. Saunders (2003), Subspace preconditioned LSQR for discrete ill-posed problems, BIT 43:5.
S. Bellavia, J. Gondzio, and B. Morini (Jul 2011), New matrix-free preconditioner for sparse least-squares problems, Report ERGO, University of Edinburgh.
D. C.-L. Fong and M. A. Saunders (Aug 2011), LSMR: An iterative algorithm for sparse least-squares problems, SISC 33.
D. C.-L. Fong and M. A. Saunders (Dec 2011), CG versus MINRES: An empirical comparison, for SQU Journal for Science.
D. C.-L. Fong (Dec 2011), Minimum-Residual Methods for Sparse Least-Squares Using Golub-Kahan Bidiagonalization, PhD thesis, ICME, Stanford University.

40 Special thanks to Sven Leyffer
Conferences really do promote action and creativity!
