The antitriangular factorisation of saddle point matrices

J. Pestana and A. J. Wathen

August 29, 2013

Abstract

Mastronardi and Van Dooren [this journal, 34 (2013), pp. 173–196] recently introduced the block antitriangular ("Batman") decomposition for symmetric indefinite matrices. Here we show the simplification of this factorisation for saddle point matrices and demonstrate how it represents the common nullspace method. We show the relation of this factorisation to constraint preconditioning and how it transforms, but preserves, the block diagonal structure of block diagonal preconditioning.

1 Introduction

The antitriangular factorisation of Mastronardi and Van Dooren [9] converts a symmetric indefinite matrix $H \in \mathbb{R}^{p \times p}$ into a block antitriangular matrix $M$ using orthogonal similarity transforms. The factorisation can be performed in a backward stable manner and linear systems with the block antitriangular matrix may be efficiently solved. Moreover, the orthogonal similarity transforms preserve eigenvalues, and preserve and reveal the inertia of $H$. Thus, from $M$ one can determine the inertia triple $(n_-, n_0, n_+)$ of $H$, where $n_-$ is the number of negative eigenvalues, $n_0$ the number of zero eigenvalues and $n_+$ the number of positive eigenvalues. The antitriangular factorisation takes the form

\[
H = QMQ^T, \qquad Q^{-1} = Q^T, \qquad
M = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & Y^T \\ 0 & 0 & X & Z^T \\ 0 & Y & Z & W \end{bmatrix}
\begin{matrix} \}\, n_0 \\ \}\, n_1 \\ \}\, n_2 \\ \}\, n_1 \end{matrix}
\tag{1}
\]

where $n_1 = \min(n_-, n_+)$, $n_2 = \max(n_-, n_+) - n_1$, $Z \in \mathbb{R}^{n_1 \times n_2}$, $W \in \mathbb{R}^{n_1 \times n_1}$ is symmetric, $X = \epsilon L L^T \in \mathbb{R}^{n_2 \times n_2}$ is symmetric definite whenever $n_2 > 0$, and $Y \in \mathbb{R}^{n_1 \times n_1}$ is nonsingular and antitriangular, so that entries above the main antidiagonal are zero. Additionally,

\[
\epsilon = \begin{cases} 1 & \text{if } n_+ > n_-, \\ -1 & \text{if } n_- > n_+. \end{cases}
\]
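Since the orthogonal transforms preserve eigenvalues, the inertia triple can be read off directly from the spectrum of $H$. The following minimal Python/NumPy sketch (ours, purely for illustration and not part of the original paper; the tolerance `tol` is an arbitrary choice) counts $(n_-, n_0, n_+)$ for a small symmetric indefinite matrix.

```python
import numpy as np

def inertia(H, tol=1e-12):
    """Return the inertia triple (n_minus, n_zero, n_plus) of symmetric H."""
    eigs = np.linalg.eigvalsh(H)               # real eigenvalues of a symmetric matrix
    return (int(np.sum(eigs < -tol)),          # n_minus
            int(np.sum(np.abs(eigs) <= tol)),  # n_zero
            int(np.sum(eigs > tol)))           # n_plus

H = np.array([[2.0, 1.0, 0.0],
              [1.0, -3.0, 1.0],
              [0.0, 1.0, 0.5]])
print(inertia(H))  # (1, 0, 2) for this indefinite example
```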

The matrix $M$ is strictly antitriangular whenever $n_2 = 0$ or $1$, i.e., whenever the number of positive and negative eigenvalues differs by at most one. However, the bulge $X$ increases in dimension as $H$ becomes closer to definite. In the extreme case that $H$ is symmetric positive (or negative) definite, $n_0 = n_1 = 0$, i.e., $X$ is itself a $p \times p$ matrix. Accordingly, the antitriangular factorisation is perhaps best suited to matrices that have a significant number of both positive and negative eigenvalues. We emphasise, however, its generality for real symmetric matrices.

Saddle point matrices are symmetric and indefinite, so the antitriangular factorisation can be applied. These matrices arise in numerous applications [1, Section 2] and have the form

\[
\mathcal{A} = \begin{bmatrix} A & B^T \\ B & 0 \end{bmatrix}
\begin{matrix} \}\, n \\ \}\, m \end{matrix}
\tag{2}
\]

where $A \in \mathbb{R}^{n \times n}$ is symmetric and $B \in \mathbb{R}^{m \times n}$. The matrix $\mathcal{A}$ is nonsingular, with $n$ positive eigenvalues and $m$ negative eigenvalues, when $A$ is positive definite on the nullspace of $B$ and $B$ has full rank. We only consider this most common situation here.

The algorithm for computing an antitriangular factorisation proposed by Mastronardi and Van Dooren is designed to be applicable to all symmetric indefinite matrices and so is unduly complicated for the special case of saddle point matrices. In this note we describe a simple procedure for obtaining an antitriangular factorisation of saddle point matrices that involves only permutation matrices and a QR factorisation of $B^T$. This QR factorisation allows us to show that solving a saddle point system with the antitriangular factorisation is equivalent to applying the nullspace method [1, Section 6], [13, Section 15.2]. In other words, the factorisation proposed by Mastronardi and Van Dooren allows the nullspace method to be represented not just as a procedure but also as a matrix decomposition, similarly to other well-known methods for solving linear systems such as Gaussian elimination.

If the matrix $\mathcal{A}$ is large, we may solve the saddle point system by an iterative method rather than by a direct method like the antitriangular factorisation. Preconditioning is usually required, with block preconditioners, such as block diagonal and constraint preconditioners, popular choices for saddle point systems. We show that the same orthogonal transformation matrix that converts $\mathcal{A}$ into an antitriangular matrix can be applied to these preconditioners and that the relevant structures are preserved.

Throughout, we use MATLAB notation to denote submatrices. Thus $K(q{:}r, s{:}t)$ is the submatrix of $K$ comprising the intersection of rows $q$ to $r$ with columns $s$ to $t$.

2 An antitriangular factorisation for saddle point matrices

We are interested in applying orthogonal transformations to the saddle point matrix $\mathcal{A}$ in (2) to obtain the antitriangular matrix $M$ in (1).

Since $\mathcal{A}$ is nonsingular, with $n$ positive eigenvalues and $m$ negative eigenvalues, in this case the antitriangular matrix has the specific form

\[
M = \begin{bmatrix} 0 & 0 & Y^T \\ 0 & X & Z^T \\ Y & Z & W \end{bmatrix}
\begin{matrix} \}\, m \\ \}\, n-m \\ \}\, m \end{matrix}
\]

where $Y \in \mathbb{R}^{m \times m}$ is antitriangular, $X \in \mathbb{R}^{(n-m) \times (n-m)}$ is symmetric positive definite and $W \in \mathbb{R}^{m \times m}$ is symmetric. Because of its nonzero profile, this factorisation is also called the Batman factorisation. We note that linear systems with $M$ can be solved with the obvious antitriangular substitution (finding the last variable from the first equation, the second-last variable from the second equation and so forth) twice, with a solve with the positive definite matrix $X$ (using, for example, a Cholesky factorisation) in between.
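To make this solution process concrete, here is a minimal Python/NumPy sketch (ours, not from the paper; the helper names are hypothetical). Each antitriangular solve is converted to an ordinary triangular solve by reversing the order of the rows, and the middle block is handled by a Cholesky factorisation.

```python
import numpy as np
from scipy.linalg import solve_triangular, cho_factor, cho_solve

def anti_solve(Y, r):
    """Solve Y c = r, with Y antitriangular (zeros above the main antidiagonal).
    Reversing the rows of Y and of r yields an upper triangular system."""
    return solve_triangular(np.flipud(Y), np.flipud(r), lower=False)

def batman_solve(M, d, m):
    """Solve M z = d, where M = [[0, 0, Y^T], [0, X, Z^T], [Y, Z, W]]
    has blocks of sizes m, n - m, m (the Batman form)."""
    p = M.shape[0]
    Y, X = M[p - m:, :m], M[m:p - m, m:p - m]
    Z, W = M[p - m:, m:p - m], M[p - m:, p - m:]
    r, s, t = d[:m], d[m:p - m], d[p - m:]
    c = anti_solve(Y.T, r)                     # first antitriangular solve: Y^T c = r
    b = cho_solve(cho_factor(X), s - Z.T @ c)  # positive definite solve: X b = s - Z^T c
    a = anti_solve(Y, t - Z @ b - W @ c)       # second antitriangular solve
    return np.concatenate([a, b, c])
```

Once the factorisation $\mathcal{A} = QMQ^T$ constructed below is available, $\mathcal{A}x = b$ can be solved by computing $z$ from $Mz = Q^T b$ with this routine and setting $x = Qz$.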

The algorithm of Mastronardi and Van Dooren proceeds outwards from the $(1,1)$ entry of $H$. Thus, at the $k$-th step we use the factorisation $M^{(k)} = Q^{(k)} H^{(k)} (Q^{(k)})^T$ of $H^{(k)} = H(1{:}k, 1{:}k)$ to obtain an antitriangular factorisation of $H^{(k+1)} = H(1{:}k+1, 1{:}k+1)$, where $Q^{(k)} \in \mathbb{R}^{k \times k}$ is orthogonal and

\[
M^{(k)} = \begin{bmatrix} 0 & 0 & Y^T \\ 0 & X & Z^T \\ Y & Z & W \end{bmatrix}.
\]

However, the inertia of $H^{(k)}$ and $H^{(k+1)}$ must differ and as a result a number of different cases must be examined at each step. In particular, updating $X$ so that it remains positive or negative definite can be rather involved. An additional issue when applying this factorisation to saddle point matrices is that the algorithm in [9] ignores the zero $(2,2)$ block entirely. We show here that for saddle point matrices a far simpler approach can be taken to achieve an antitriangular factorisation.

As a first step, we use the matrix

\[
Q_1^T = \begin{bmatrix} 0 & I_m \\ I_n & 0 \end{bmatrix}
\]

to permute the blocks of $\mathcal{A}$ so that the zero matrix is now in the top left corner:

\[
Q_1^T \mathcal{A} Q_1 = \begin{bmatrix} 0 & B \\ B^T & A \end{bmatrix}
\begin{matrix} \}\, m \\ \}\, n \end{matrix}.
\]

Our next task is to make the matrix $B^T$ antitriangular. To achieve this we compute the (full) QR decomposition of $B^T$,

\[
B^T = \underbrace{\begin{bmatrix} U_1 & U_2 \end{bmatrix}}_{U} \begin{bmatrix} R \\ 0 \end{bmatrix},
\]

where $U \in \mathbb{R}^{n \times n}$ and $R \in \mathbb{R}^{m \times m}$. Note that the columns of $U_1$ form an orthonormal basis for the range of $B^T$ while the columns of $U_2$ form an orthonormal basis for the nullspace of $B$.

Next, we embed this orthogonal $U$ in a larger matrix

\[
Q_2^T = \begin{bmatrix} I_m & 0 \\ 0 & U^T \end{bmatrix},
\]

which we apply to our permuted saddle point matrix to give

\[
Q_2^T Q_1^T \mathcal{A} Q_1 Q_2 = \begin{bmatrix} 0 & R^T & 0 \\ R & \hat{A}_{11} & \hat{A}_{12} \\ 0 & \hat{A}_{12}^T & \hat{A}_{22} \end{bmatrix}
\begin{matrix} \}\, m \\ \}\, m \\ \}\, n-m \end{matrix}
\tag{3}
\]

where $\hat{A}_{ij} = U_i^T A U_j$, $i, j = 1, 2$. Note that since $A$ is positive definite on the nullspace of $B$, and the columns of $U_2$ form a basis for this nullspace, $\hat{A}_{22}$ is positive definite.

To obtain the desired antitriangular form, all that remains is to permute the last $n - m$ rows and columns so that the matrix $R$ is transformed to an antitriangular matrix that sits in the last $m$ rows. One matrix that accomplishes this is

\[
Q_3^T = \begin{bmatrix} I_m & 0 & 0 \\ 0 & 0 & I_{n-m} \\ 0 & S_m & 0 \end{bmatrix},
\]

where $S_m$ is the reverse identity matrix of dimension $m$, with ones on the antidiagonal and zeros elsewhere, also known as the standard involutory matrix or flip matrix. We drop the subscripts on the identity and reverse identity matrices when the dimension is clear from the context. Note that $S^{-1} = S^T = S$. Application of $Q_3$ then gives the desired antitriangularisation:

\[
M = Q^T \mathcal{A} Q = \begin{bmatrix} 0 & 0 & Y^T \\ 0 & X & Z^T \\ Y & Z & W \end{bmatrix}, \qquad
Q = Q_1 Q_2 Q_3 = \begin{bmatrix} 0 & U_2 & U_1 S_m \\ I_m & 0 & 0 \end{bmatrix},
\tag{4}
\]

where $Y = SR \in \mathbb{R}^{m \times m}$, $X = \hat{A}_{22} \in \mathbb{R}^{(n-m) \times (n-m)}$, $Z = S\hat{A}_{12}$ and $W = S\hat{A}_{11}S$. The matrices $X$ and $W$ are symmetric and $X = \hat{A}_{22}$ is positive definite.

Thus, a block antitriangularisation of $\mathcal{A}$ can be computed using only permutations and a QR decomposition of $B^T$. The method is clearly backward stable provided the QR decomposition is computed in a backward stable manner, say by Householder transformations or Givens rotations. In contrast, if we were to begin at the upper-left corner of $\mathcal{A}$ and proceed outwards, as is done in [9] for general symmetric indefinite matrices, we would need to separate out the positive definite part of $A$, i.e., its projection onto the nullspace of $B$, by updating a Cholesky factorisation at each stage. We would then require orthogonal transformations to convert $B$ to upper triangular form and move the zero block to the upper left corner.
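The whole construction translates directly into a few lines of code. The sketch below (ours, illustrative only; the test matrices are random and the helper name `antitriangularise` is hypothetical) builds $Q$ from a full QR decomposition of $B^T$ exactly as in (4) and checks the resulting block antitriangular structure.

```python
import numpy as np

def antitriangularise(A, B):
    """Batman factorisation M = Q^T [[A, B^T], [B, 0]] Q via permutations
    and a full QR decomposition of B^T, as in (4)."""
    n, m = A.shape[0], B.shape[0]
    U, _ = np.linalg.qr(B.T, mode='complete')     # B^T = U [[R], [0]]
    U1, U2 = U[:, :m], U[:, m:]
    S = np.flipud(np.eye(m))                      # reverse identity S_m
    Q = np.block([[np.zeros((n, m)), U2, U1 @ S], # Q = Q1 Q2 Q3 from (4)
                  [np.eye(m), np.zeros((m, n))]])
    Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])
    return Q, Q.T @ Amat @ Q

rng = np.random.default_rng(0)
n, m = 6, 2
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)        # positive definite, hence p.d. on null(B)
B = rng.standard_normal((m, n))    # full rank with probability one
Q, M = antitriangularise(A, B)
print(np.allclose(Q.T @ Q, np.eye(n + m)))                     # Q orthogonal
print(np.allclose(M[:m, :n], 0), np.allclose(M[m:n, :m], 0))   # zero blocks
Y = M[n:, :m]
print(np.allclose(np.flipud(Y), np.triu(np.flipud(Y))))        # Y antitriangular
```

Together with `batman_solve` from the previous sketch, this gives a complete direct solver: with $b = [f;\, g]$, compute $z$ from $Mz = Q^T b$ and set $x = Qz$.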

Another benefit of this simpler approach is that it provides an orthonormal basis for the nullspace of $B$. This allows us to link the antitriangular factorisation with the nullspace method and with preconditioning, as we show in subsequent sections.

3 Comparison with the nullspace method

In the previous section we saw that an antitriangular factorisation for saddle point matrices can be easily computed. In this section we show that using this factorisation to solve the linear system

\[
\mathcal{A}x = \begin{bmatrix} A & B^T \\ B & 0 \end{bmatrix} \begin{bmatrix} u \\ p \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix}
\tag{5}
\]

is equivalent to applying the nullspace method.

The nullspace method requires a basis for the nullspace of $B$, such as $U_2$, and a particular solution $\hat{u}$ of $Bu = g$. (Note that any basis of the nullspace of $B$ would in general be sufficient, but we use $U_2$ here as this common choice corresponds to the antitriangular factorisation.) With these two quantities, the nullspace method proceeds as follows [1, Section 6], [13, Section 15.2]:

1. Solve $U_2^T A U_2 v = U_2^T (f - A\hat{u})$;
2. Set $u_* = U_2 v + \hat{u}$;
3. Solve $BB^T p_* = B(f - Au_*)$;

then $(u_*, p_*)$ solves (5).

On the other hand, applying the antitriangularisation (4) to (5) gives $(Q^T \mathcal{A} Q)(Q^T x) = M(Q^T x) = Q^T b$, or

\[
\begin{bmatrix} 0 & 0 & Y^T \\ 0 & X & Z^T \\ Y & Z & W \end{bmatrix}
\begin{bmatrix} p \\ U_2^T u \\ S U_1^T u \end{bmatrix}
=
\begin{bmatrix} g \\ U_2^T f \\ S U_1^T f \end{bmatrix}.
\]

To recover $u$ and $p$ we must solve

\begin{align}
Y^T S U_1^T u &= g, \tag{6a} \\
X U_2^T u + Z^T S U_1^T u &= U_2^T f, \tag{6b} \\
Yp + Z U_2^T u + W S U_1^T u &= S U_1^T f, \tag{6c}
\end{align}

using the antitriangular substitution described in the previous section.
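For concreteness, the three steps of the nullspace method might be implemented as follows (again an illustrative Python/NumPy sketch of ours, with dense solves standing in for whatever factorisations would be used in practice; the particular solution is taken as $\hat{u} = U_1 R^{-T} g$, which satisfies $B\hat{u} = g$ since $B = R^T U_1^T$).

```python
import numpy as np
from scipy.linalg import solve_triangular

def nullspace_solve(A, B, f, g):
    """Nullspace method for [[A, B^T], [B, 0]] [u; p] = [f; g],
    with the nullspace basis U2 taken from a full QR of B^T."""
    n, m = A.shape[0], B.shape[0]
    U, Rfull = np.linalg.qr(B.T, mode='complete')
    R, U1, U2 = Rfull[:m, :], U[:, :m], U[:, m:]
    u_hat = U1 @ solve_triangular(R.T, g, lower=True)           # B u_hat = g
    v = np.linalg.solve(U2.T @ A @ U2, U2.T @ (f - A @ u_hat))  # step 1
    u = U2 @ v + u_hat                                          # step 2
    p = np.linalg.solve(B @ B.T, B @ (f - A @ u))               # step 3
    return u, p

rng = np.random.default_rng(1)
n, m = 6, 2
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)
B = rng.standard_normal((m, n))
f, g = rng.standard_normal(n), rng.standard_normal(m)
u, p = nullspace_solve(A, B, f, g)
Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])
x = np.linalg.solve(Amat, np.concatenate([f, g]))
print(np.allclose(np.concatenate([u, p]), x))  # matches a direct solve
```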

This is equivalent to applying the inverse of $M$, which has upper block antitriangular structure, that is,

\[
M^{-1} = \begin{bmatrix}
Y^{-1}(ZX^{-1}Z^T - W)Y^{-T} & -Y^{-1}ZX^{-1} & Y^{-1} \\
-X^{-1}Z^T Y^{-T} & X^{-1} & 0 \\
Y^{-T} & 0 & 0
\end{bmatrix}
\begin{matrix} \}\, m \\ \}\, n-m \\ \}\, m \end{matrix}.
\]

We now show that solving (6a)–(6c) is equivalent to applying the nullspace method. Substituting $Y = SR$ in (6a), we find that

\[
R^T S^T S U_1^T u = R^T U_1^T u = \begin{bmatrix} R^T & 0^T \end{bmatrix} \begin{bmatrix} U_1^T \\ U_2^T \end{bmatrix} u = Bu = g.
\]

Moreover, $g = B(UU^T)u = B(U_1 U_1^T + U_2 U_2^T)u = BU_1 U_1^T u$, since the columns of $U_2$ span the kernel of $B$. Thus, a particular solution of $Bu = g$ is $\hat{u} = U_1 U_1^T u$.

Let us now examine (6b). Since $X = \hat{A}_{22}$ and $Z = S\hat{A}_{12}$, where $\hat{A}_{ij} = U_i^T A U_j$, $i, j = 1, 2$, as before, we have that

\begin{align*}
X U_2^T u + Z^T S U_1^T u &= U_2^T f \\
(U_2^T A U_2)(U_2^T u) + U_2^T A U_1 S^T S U_1^T u &= U_2^T f \\
(U_2^T A U_2)(U_2^T u) &= U_2^T (f - AU_1 U_1^T u).
\end{align*}

Thus, $U_2^T A U_2 v = U_2^T (f - A\hat{u})$, where $v = U_2^T u$.

Substituting for $\hat{u}$, $v$ and $W = S\hat{A}_{11}S$ in (6c) then gives

\begin{align*}
Yp + ZU_2^T u + WSU_1^T u &= SU_1^T f \\
SRp + SU_1^T A U_2 U_2^T u + SU_1^T A U_1 S^T S U_1^T u &= SU_1^T f \\
Rp &= U_1^T \left[ f - A(U_2 v + \hat{u}) \right] \\
R^T R p &= \begin{bmatrix} R^T & 0^T \end{bmatrix} \begin{bmatrix} U_1^T \\ U_2^T \end{bmatrix} \left[ f - A(U_2 v + \hat{u}) \right] \\
BB^T p &= B(f - Au_*),
\end{align*}

where $u_* = U_2 v + \hat{u}$.

Thus, solving a system with the antitriangular factorisation is equivalent to applying the nullspace method with the QR nullspace basis and with $\hat{u} = U_1 U_1^T u$ and $v = U_2^T u$. It is straightforward to show that

\[
u_* = U_2 v + \hat{u} = (U_2 U_2^T + U_1 U_1^T)u = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} U_1^T \\ U_2^T \end{bmatrix} u = u,
\]

as we would expect, while $p_*$ coincides with the vector computed in step 3 of the nullspace method. Note that no antitriangular solves are required in the nullspace method, even though we are solving a linear system with a block antitriangular matrix.

This is because the permutation matrix $S$ that transforms the upper triangular matrix $R$ to antitriangular form occurs as $S^2 = I$ in (6a) and can be eliminated from (6c).

We have seen that the factorisation of Mastronardi and Van Dooren allows us to view the nullspace method as a factorisation rather than as a procedure. It also puts the nullspace method into the same framework as other direct solvers, such as Gaussian elimination, which can be written as the product of structured matrices. This alternative viewpoint may be useful.

Application of the nullspace method is only viable when it is not too costly to solve linear systems with $U_2^T A U_2$. Otherwise, iterative methods may be more attractive. In the next section we show that constraint preconditioners for iterative methods applied to saddle point systems can also be related to the factorisation (4).

4 The antitriangular factorisation and preconditioning

When the saddle point system (5) is too large to be solved by a direct method like the antitriangular factorisation or the nullspace method, an iterative method such as a Krylov subspace method is usually applied. Unfortunately, however, these iterative methods typically converge slowly when applied to saddle point problems unless preconditioners are used. Many preconditioners for saddle point matrices have been proposed [1, Section 10], [2], but we focus here on block preconditioners and show how they can be factored by the antitriangular factorisation of Section 2. We first discuss block diagonal preconditioners and then describe constraint preconditioners, showing that in the latter case the same orthogonal transformation converts $\mathcal{A}$ and the preconditioner to antitriangular form. We assume throughout that $\mathcal{A}$ in (5) is factorised in antitriangular form (4), i.e., that $\mathcal{A} = QMQ^T$.

We briefly mention the block diagonal matrix

\[
P_D = \begin{bmatrix} T & 0 \\ 0 & V \end{bmatrix},
\]

where $T \in \mathbb{R}^{n \times n}$ and $V \in \mathbb{R}^{m \times m}$ are symmetric. Often $T$ is chosen to approximate $A$ and $V$ to approximate the Schur complement $BA^{-1}B^T$. Indeed, if $T = A$ and $V = BA^{-1}B^T$ then $P_D^{-1}\mathcal{A}$ has the three eigenvalues $1$ and $(1 \pm \sqrt{5})/2$ [7, 10]. Applying $Q$ in a similarity transform gives

\[
Q^T P_D Q = \begin{bmatrix} V & 0 & 0 \\ 0 & \hat{T}_{22} & \hat{T}_{12}^T S \\ 0 & S\hat{T}_{12} & S\hat{T}_{11}S \end{bmatrix},
\]

where $\hat{T}_{ij} = U_i^T T U_j$, $i, j = 1, 2$. Thus, the transformed preconditioner is also block diagonal, with an $m \times m$ block followed by an $n \times n$ block. Note that since $P_D$ is positive definite, $Q^T P_D Q$ cannot have significant block antidiagonal structure.
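This three-eigenvalue property is easily observed numerically. The sketch below (ours, illustrative only; in practice one would never form $P_D^{-1}\mathcal{A}$ explicitly) uses the ideal choices $T = A$ and $V = BA^{-1}B^T$ and prints the distinct eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)
B = rng.standard_normal((m, n))
Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])
V = B @ np.linalg.solve(A, B.T)              # Schur complement B A^{-1} B^T
PD = np.block([[A, np.zeros((n, m))],
               [np.zeros((m, n)), V]])
eigs = np.linalg.eigvals(np.linalg.solve(PD, Amat))
print(np.unique(np.round(np.real(eigs), 10)))  # 1 and (1 +/- sqrt(5))/2
print((1 - np.sqrt(5)) / 2, (1 + np.sqrt(5)) / 2)
```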

Constraint preconditioners [6, 8, 11, 12],

\[
P_C = \begin{bmatrix} T & B^T \\ B & 0 \end{bmatrix},
\tag{7}
\]

on the other hand, preserve the constraints of $\mathcal{A}$ exactly but replace $A$ by a symmetric approximation $T$. The preconditioned matrix $P_C^{-1}\mathcal{A}$ has at least $2m$ unit eigenvalues, with the remainder being the eigenvalues $\lambda$ of the generalised eigenproblem $U_2^T A U_2 v = \lambda U_2^T T U_2 v$ [6, 8]. (Note that we could use any basis for the nullspace of $B$ in place of $U_2$.) Since $A$ is positive definite on the nullspace of $B$, any non-unit eigenvalues are real, although negative eigenvalues will occur when $U_2^T T U_2$ is not positive definite. Precisely because the constraints are preserved,

\[
Q^T P_C Q = \begin{bmatrix} 0 & 0 & R^T S \\ 0 & \hat{T}_{22} & \hat{T}_{12}^T S \\ SR & S\hat{T}_{12} & S\hat{T}_{11}S \end{bmatrix},
\tag{8}
\]

where $\hat{T}_{ij} = U_i^T T U_j$, $i, j = 1, 2$, is an antitriangular matrix when $\hat{T}_{22}$ is positive definite. Indeed, Keller et al. [6] use a flipped version of (3) to investigate the eigenvalues of $P_C^{-1}\mathcal{A}$. The matrix $Q^T P_C Q$ may be applied using the procedure outlined in Section 3, and it makes clear the equivalence between constraint preconditioners and the nullspace method that has previously been observed [5, 8, 14]. Conversely, any matrix $N$ in antitriangular form with $Y = SR$ defines a constraint preconditioner $QNQ^T$ for $\mathcal{A}$.

We note that alternative factorisations of constraint preconditioners, some of which rely on a basis for the nullspace of $B$, have also been proposed. In particular, the Schilders factorisation [3, 4, 15] re-orders the variables so that $B = [B_1 \; B_2]$, with $B_1 \in \mathbb{R}^{m \times m}$ nonsingular. In this case

\[
N = \begin{bmatrix} -B_1^{-1} B_2 \\ I \end{bmatrix}
\]

is a basis for the nullspace and is important to the factorisation.

For a true, inertia-revealing antitriangular factorisation, $T$ must be positive definite on the nullspace of $B$, so that $\hat{T}_{22}$ is symmetric positive definite. If we are only interested in preconditioning (and not in the inertia of $P_C$) we only require that $\hat{T}_{22}$ is invertible, although the spectrum may be less favourable in this case.

In summary, we see that applying the same orthogonal similarity transform that makes $\mathcal{A}$ antitriangular to $P_D$ and $P_C$ results in preconditioners with specific structures. The antitriangular form of $P_C$ makes the equivalence between constraint preconditioners and the nullspace method clear, and may provide other insights into the properties of constraint preconditioners.
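These spectral properties can again be checked numerically. In the sketch below (ours; the diagonal approximation $T = \mathrm{diag}(A)$ is just a convenient symmetric choice), $P_C^{-1}\mathcal{A}$ is seen to have at least $2m$ unit eigenvalues, and the remaining eigenvalues match those of the generalised eigenproblem above.

```python
import numpy as np
from scipy.linalg import eigvals, qr

rng = np.random.default_rng(3)
n, m = 6, 2
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)
B = rng.standard_normal((m, n))
T = np.diag(np.diag(A))                  # simple symmetric approximation to A
Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])
PC = np.block([[T, B.T], [B, np.zeros((m, m))]])
print(np.sort(np.real(eigvals(np.linalg.solve(PC, Amat)))))  # >= 2m ones
U2 = qr(B.T)[0][:, m:]                   # orthonormal basis for null(B)
print(np.sort(np.real(eigvals(U2.T @ A @ U2, U2.T @ T @ U2))))  # non-unit part
```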

5 Conclusions

We have considerably simplified the antitriangular factorisation for symmetric indefinite matrices of Mastronardi and Van Dooren in the specific and common case of saddle point matrices. This leads to the observation that this factorisation is equivalent to the well-known nullspace method. Additionally, we have considered the form of this antitriangular factorisation for the popular constraint preconditioning and block diagonal preconditioning, showing how specific structures are preserved.

Acknowledgements

The authors would like to thank Gil Strang for inspiring this work and for valuable suggestions. This publication was based on work supported in part by Award No. KUK-C1-013-04, made by King Abdullah University of Science and Technology (KAUST).

References

[1] M. Benzi, G. H. Golub, and J. Liesen. Numerical solution of saddle point problems. Acta Numer., 14:1–137, 2005.

[2] M. Benzi and A. J. Wathen. Some preconditioning techniques for saddle point problems. In W. H. A. Schilders, H. A. van der Vorst, and J. Rommes, editors, Model Order Reduction: Theory, Research Aspects and Applications, volume 13 of Mathematics in Industry, pages 195–211. Springer-Verlag, Berlin, Heidelberg, 2008.

[3] H. S. Dollar, N. I. M. Gould, W. H. A. Schilders, and A. J. Wathen. Implicit-factorization preconditioning and iterative solvers for regularized saddle-point systems. SIAM J. Matrix Anal. Appl., 28:170–189, 2006.

[4] H. S. Dollar and A. J. Wathen. Approximate factorization constraint preconditioners for saddle-point problems. SIAM J. Sci. Comput., 27:1555–1572, 2006.

[5] N. I. M. Gould, M. E. Hribar, and J. Nocedal. On the solution of equality constrained quadratic programming problems arising in optimization. SIAM J. Sci. Comput., 23:1376–1395, 2001.

[6] C. Keller, N. I. M. Gould, and A. J. Wathen. Constraint preconditioning for indefinite linear systems. SIAM J. Matrix Anal. Appl., 21:1300–1317, 2000.

[7] Y. A. Kuznetsov. Efficient iterative solvers for elliptic finite element problems on nonmatching grids. Russ. J. Numer. Anal. Math. Modelling, 10:187–211, 1995.

[8] L. Lukšan and J. Vlček. Indefinitely preconditioned inexact Newton method for large sparse equality constrained non-linear programming problems. Numer. Linear Algebra Appl., 5:219–247, 1998.

[9] N. Mastronardi and P. Van Dooren. The antitriangular factorization of symmetric matrices. SIAM J. Matrix Anal. Appl., 34:173–196, 2013.

[10] M. F. Murphy, G. H. Golub, and A. J. Wathen. A note on preconditioning for indefinite linear systems. SIAM J. Sci. Comput., 21:1969–1972, 2000.

[11] B. A. Murtagh and M. A. Saunders. Large-scale linearly constrained optimization. Math. Program., 14:41–72, 1978.

[12] B. A. Murtagh and M. A. Saunders. A projected Lagrangian algorithm and its implementation for sparse nonlinear constraints. Math. Program. Stud., 16:84–117, 1982.

[13] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, New York, NY, 1999.

[14] I. Perugia, V. Simoncini, and M. Arioli. Linear algebra methods in a mixed approximation of magnetostatic problems. SIAM J. Sci. Comput., 21(3):1085–1101, 1999.

[15] W. H. A. Schilders. A preconditioning technique for indefinite systems arising in electronic circuit simulation. Talk at the one-day meeting on preconditioning methods for indefinite linear systems, December 9, 2002.