Linear algebra issues in Interior Point methods for bound-constrained least-squares problems


1 Linear algebra issues in Interior Point methods for bound-constrained least-squares problems. Stefania Bellavia, Dipartimento di Energetica "S. Stecco", Università degli Studi di Firenze. Joint work with Jacek Gondzio and Benedetta Morini. Bologna, 6th-7th March 2008.

2 Outline
1. Introduction: the problem; the inexact Interior Point framework; focus on the linear algebra phase
2. The regularized Interior Point Newton-like method
3. Iterative linear algebra: the preconditioner; spectral properties; PPCG
4. Numerical results

3 Bound Constrained Least-Squares Problems

$$\min_{l \le x \le u} q(x) = \tfrac{1}{2}\|Ax - b\|_2^2 + \tfrac{1}{2}\mu\|x\|_2^2$$

Vectors $l \in (\mathbb{R} \cup \{-\infty\})^n$ and $u \in (\mathbb{R} \cup \{+\infty\})^n$ are lower and upper bounds on the variables. $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $\mu \ge 0$ are given, and $m \ge n$. We expect $A$ to be large and sparse. We allow the solution $x^*$ to be degenerate: $x^*_i = l_i$ or $x^*_i = u_i$ with $\nabla q_i(x^*) = 0$, for some $i$, $1 \le i \le n$.

4 We limit the presentation to NNLS problems:

$$\min_{x \ge 0} q(x) = \tfrac{1}{2}\|Ax - b\|_2^2$$

We assume $A$ has full column rank, so there is a unique solution $x^*$. Let $g(x) = \nabla q(x) = A^T(Ax - b)$ and let $D(x)$ be the diagonal matrix with entries

$$d_i(x) = \begin{cases} x_i & \text{if } g_i(x) \ge 0 \\ 1 & \text{otherwise.} \end{cases}$$

The core of our procedure is an Inexact Newton-like method applied to the first-order optimality condition for NNLS: $D(x)g(x) = 0$.
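The scaled residual is cheap to form. Below is a minimal Python sketch (a stand-in for the talk's Matlab implementation) of $g(x)$ and the diagonal of $D(x)$ as defined above:

```python
import numpy as np

def scaled_residual(A, b, x):
    """Return g(x) = A^T (A x - b) and the diagonal of D(x)."""
    g = A.T @ (A @ x - b)
    d = np.where(g >= 0.0, x, 1.0)   # d_i = x_i if g_i(x) >= 0, else 1
    return g, d

# x solves NNLS iff the componentwise product D(x) g(x) vanishes:
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 0.0, 1.0])        # b = A (1, 0)^T, so x* = (1, 0)^T
g, d = scaled_residual(A, b, np.array([1.0, 0.0]))
print(np.linalg.norm(d * g))          # 0.0 at the solution
```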

5 Inexact Newton Interior Point methods for $D(x)g(x) = 0$ [Bellavia, Macconi, Morini, NLAA, 2006]. The method uses ideas of [Heinkenschloss, Ulbrich, Ulbrich, Math. Progr., 1999]. Let $E(x)$ be the diagonal positive semidefinite matrix with entries

$$e_i(x) = \begin{cases} g_i(x) & \text{if } 0 \le g_i(x) < x_i^2 \text{ or } g_i(x)^2 > x_i \\ 0 & \text{otherwise.} \end{cases}$$

Let $W(x)$ and $S(x)$ be the diagonal matrices

$$W(x) = (E(x) + D(x))^{-1}, \qquad S(x) = (W(x)D(x))^{1/2}.$$

Note that $(S(x))^2_{i,i} \in (0,1]$ and $(W(x)E(x))_{i,i} \in [0,1)$.
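A sketch of the diagonal scalings, transcribing the definitions above (assuming a strictly positive iterate $x$, as Interior Point iterates are):

```python
import numpy as np

def diagonal_scalings(x, g):
    """Diagonals of D(x), E(x), W(x) = (E+D)^{-1} and S(x) = (W D)^{1/2}."""
    d = np.where(g >= 0.0, x, 1.0)
    # e_i = g_i if 0 <= g_i < x_i^2 or g_i^2 > x_i, else 0
    e = np.where(((g >= 0.0) & (g < x**2)) | (g**2 > x), g, 0.0)
    w = 1.0 / (e + d)                 # E + D is diagonal and positive for x > 0
    s = np.sqrt(w * d)
    return d, e, w, s
```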

6 k-th iteration. Solve the s.p.d. system

$$Z_k \tilde p_k = -S_k g_k + r_k, \qquad \|r_k\| \le \eta_k \|W_k D_k g_k\|,$$

where $\eta_k \in [0,1)$ and $Z_k \equiv Z(x_k)$ is given by

$$Z_k = S_k^T (A^T A + D_k^{-1} E_k) S_k = S_k^T A^T A S_k + W_k E_k.$$

Form the step $p_k = S_k \tilde p_k$ and project it onto an interior of the positive orthant:

$$\hat p_k = \max\{\sigma, 1 - \|P(x_k + p_k) - x_k\|\}\,(P(x_k + p_k) - x_k),$$

where $\sigma \in (0,1)$ is close to one and $P$ denotes the projection onto the feasible set.
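The damping and projection step is elementwise; a sketch ($\sigma = 0.995$ is a placeholder value, not the talk's choice):

```python
import numpy as np

def project_step(x, p, sigma=0.995):
    """Damped projection of the step onto the interior of {x >= 0}."""
    step = np.maximum(x + p, 0.0) - x           # P(x_k + p_k) - x_k
    t = max(sigma, 1.0 - np.linalg.norm(step))  # damping factor close to one
    return t * step                             # \hat p_k
```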

7 Globalization Phase. Set

$$x_{k+1} = x_k + (1-t)\hat p_k + t p^C_k, \qquad t \in [0,1),$$

where $p^C_k$ is a constrained Cauchy step and $t$ is chosen to guarantee a sufficient decrease of the objective function $q(x)$. Iterates remain strictly positive. Eventually $t = 0$ is taken, and up to quadratic convergence can be obtained without assuming strict complementarity at $x^*$.

8 The Linear Algebra Phase: normal equations. The system $Z_k \tilde p_k = -S_k g_k$ represents the normal equations for the least-squares problem

$$\min_{\tilde p \in \mathbb{R}^n} \|B_\delta \tilde p + h\|$$

with

$$B_\delta = \begin{pmatrix} A S_k \\ W_k^{1/2} E_k^{1/2} \end{pmatrix}, \qquad h = \begin{pmatrix} A x_k - b \\ 0 \end{pmatrix}.$$
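One way to exploit this formulation is a sparse least-squares solver applied to the stacked matrix $B_\delta$; a sketch with SciPy's LSQR (a stand-in, not the talk's solver):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def step_by_lsqr(A, b, x, s, we):
    """Solve min ||B_delta p + h||_2 for the scaled step p = \tilde p_k.

    A is sparse (CSR/CSC); s, we are the diagonals of S_k and W_k E_k."""
    n = A.shape[1]
    B = sp.vstack([A @ sp.diags(s), sp.diags(np.sqrt(we))])
    h = np.concatenate([A @ x - b, np.zeros(n)])
    return lsqr(B, -h)[0]     # min ||B p - (-h)|| = min ||B p + h||
```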

9 The Linear Algebra Phase: augmented system. The step $\tilde p_k$ can be obtained by solving

$$\underbrace{\begin{pmatrix} I & A S_k \\ S_k A^T & -W_k E_k \end{pmatrix}}_{H_\delta} \begin{pmatrix} q_k \\ \tilde p_k \end{pmatrix} = \begin{pmatrix} -(A x_k - b) \\ 0 \end{pmatrix}.$$

Note that $W_k E_k$ is positive semidefinite and

$$v^T W_k E_k v \ge \delta\, v^T v, \quad \forall v \in \mathbb{R}^n, \qquad \text{where } 1 > \delta = \min_i (w_k e_k)_i.$$
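Assembling $H_\delta$ takes a few lines with scipy.sparse; a sketch:

```python
import scipy.sparse as sp

def augmented_matrix(A, s, we):
    """H_delta for a sparse m-by-n A; s, we are diag(S_k), diag(W_k E_k)."""
    m = A.shape[0]
    AS = A @ sp.diags(s)
    return sp.bmat([[sp.eye(m), AS],
                    [AS.T, -sp.diags(we)]], format='csc')
```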

10 Conditioning issues. Let $0 < \sigma_1 \le \sigma_2 \le \ldots \le \sigma_n$ be the singular values of $A S_k$ and assume $\sigma_1 \ll 1$.

If $\delta = 0$ then

$$\kappa_2(H_0) \approx \frac{1 + \sigma_n}{\sigma_1^2}, \qquad \kappa_2(B_0) \approx \frac{1 + \sigma_n}{\sigma_1},$$

i.e. $\kappa_2(H_0)$ may be much greater than $\kappa_2(B_0)$.

If $\delta > 0$ (regularized system), then

$$\kappa_2(H_\delta) \approx \frac{1 + \sigma_n}{\delta}, \qquad \kappa_2(B_\delta) \approx \frac{1 + \sigma_n}{\sqrt{\delta}},$$

i.e. if $\delta > \sigma_1$, then $\kappa_2(H_\delta)$ (resp. $\kappa_2(B_\delta)$) may be considerably smaller than $\kappa_2(H_0)$ (resp. $\kappa_2(B_0)$).
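A tiny numerical illustration of these estimates on random data (purely for intuition; the sizes and constants are not the talk's):

```python
import numpy as np

rng = np.random.default_rng(0)
# AS with singular values spread over roughly [1e-4, 1]:
AS = rng.standard_normal((50, 10)) @ np.diag(np.logspace(-4, 0, 10))
for delta in (0.0, 1e-4):
    B = np.vstack([AS, np.sqrt(delta) * np.eye(10)])                  # B_delta
    H = np.block([[np.eye(50), AS], [AS.T, -delta * np.eye(10)]])     # H_delta
    print(f"delta={delta:.0e}  cond(B)={np.linalg.cond(B):.1e}"
          f"  cond(H)={np.linalg.cond(H):.1e}")
```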

11 The Regularized I.P. Newton-like method. If $\sigma_1$ is not small, the regularization does not deteriorate $\kappa_2(H_\delta)$ with respect to $\kappa_2(H_0)$: a clear benefit from regularization (see also [Saunders, BIT, 1995], [Silvester and Wathen, SINUM, 1994]).

Modification of the affine scaling I.P. method:

$$\bar Z_k \tilde p_k = -S_k g_k + r_k,$$

where

$$\bar Z_k = S_k^T (A^T A + D_k^{-1} E_k + \Delta_k) S_k = \underbrace{S_k^T A^T A S_k + W_k E_k}_{Z_k} + \Delta_k S_k^2$$

and $\Delta_k$ is diagonal with entries in $[0,1)$.

12 Fast convergence of the method is preserved (in the presence of degeneracy, too). The globalization strategy of [BMM] can be applied with slight modifications. The least-squares problem and the augmented system are regularized:

$$B_\delta = \begin{pmatrix} A S_k \\ C_k^{1/2} \end{pmatrix}, \qquad H_\delta = \begin{pmatrix} I & A S_k \\ S_k A^T & -C_k \end{pmatrix},$$

where $C_k = W_k E_k + \Delta_k S_k^2$.

13 Features of the method. Let $\tau \in (0,1)$ be a small positive threshold and

$$I_k = \{i \in \{1, 2, \ldots, n\} : (s_k^2)_i \ge 1 - \tau\}, \qquad A_k = \{1, 2, \ldots, n\} \setminus I_k, \qquad n_1 = \mathrm{card}(I_k);$$

then $S_k = \mathrm{diag}((S_k)_I, (S_k)_A)$. Note that $S_k^2 + W_k E_k = I$. When $x_k$ converges to $x^*$,

$$\lim_k (S_k)_I = I, \quad \lim_k (S_k)_A = 0, \quad \lim_k (W_k E_k)_I = 0, \quad \lim_k (W_k E_k)_A = I.$$

$I_k$ is expected to eventually settle down (inactive components and possibly degenerate components).

14 Solving the augmented system. The following partition of the augmented system is induced:

$$\begin{pmatrix} I & A_I (S_k)_I & A_A (S_k)_A \\ (S_k)_I A_I^T & -(C_k)_I & 0 \\ (S_k)_A A_A^T & 0 & -(C_k)_A \end{pmatrix} \begin{pmatrix} q_k \\ (\tilde p_k)_I \\ (\tilde p_k)_A \end{pmatrix} = \begin{pmatrix} -(A x_k - b) \\ 0 \\ 0 \end{pmatrix}.$$

Eliminating $(\tilde p_k)_A$ we get

$$\underbrace{\begin{pmatrix} I + Q_k & A_I (S_k)_I \\ (S_k)_I A_I^T & -(C_k)_I \end{pmatrix}}_{H_k} \begin{pmatrix} q_k \\ (\tilde p_k)_I \end{pmatrix} = \begin{pmatrix} -(A x_k - b) \\ 0 \end{pmatrix}, \qquad H_k \in \mathbb{R}^{(m+n_1) \times (m+n_1)}.$$

15 The Preconditioner. Note that

$$H_k = \underbrace{\begin{pmatrix} I & A_I (S_k)_I \\ (S_k)_I A_I^T & -(\Delta_k S_k^2)_I \end{pmatrix}}_{P_k} + \begin{pmatrix} Q_k & 0 \\ 0 & -(W_k E_k)_I \end{pmatrix},$$

where $Q_k = A_A (S_k C_k^{-1} S_k)_A A_A^T$. When $x_k$ converges to $x^*$, $(S_k)_A \to 0$ and $(C_k)_A \to I$, so

$$\lim_k Q_k = 0, \qquad \lim_k (W_k E_k)_I = 0.$$

16 Factorization of the Preconditioner.

$$P_k = \begin{pmatrix} I & A_I (S_k)_I \\ (S_k)_I A_I^T & -(\Delta_k S_k^2)_I \end{pmatrix}$$

can be factorized as

$$P_k = \begin{pmatrix} I & 0 \\ 0 & (S_k)_I \end{pmatrix} \underbrace{\begin{pmatrix} I & A_I \\ A_I^T & -(\Delta_k)_I \end{pmatrix}}_{\Pi_k} \begin{pmatrix} I & 0 \\ 0 & (S_k)_I \end{pmatrix}.$$

If $I_k$ and $\Delta_k$ remain unchanged for a few iterations, the factorization of the matrix $\Pi_k$ does not have to be updated. $I_k$ is expected to eventually settle down.
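This factorization suggests factoring $\Pi_k$ once and reusing it while $I_k$ and $\Delta_k$ are frozen; a sketch using a sparse LU in place of a symmetric indefinite factorization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def make_preconditioner(A_I, s_I, delta_I):
    """Return a function applying P_k^{-1}, via one factorization of Pi_k."""
    m = A_I.shape[0]
    Pi = sp.bmat([[sp.eye(m), A_I],
                  [A_I.T, -sp.diags(delta_I)]], format='csc')
    lu = splu(Pi)                    # factor once; reuse while I_k is frozen
    scale = np.concatenate([np.ones(m), s_I])
    # P_k = diag(I, S_I) Pi_k diag(I, S_I), hence
    # P_k^{-1} r = diag(I, S_I)^{-1} Pi_k^{-1} diag(I, S_I)^{-1} r:
    return lambda r: lu.solve(r / scale) / scale
```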

17 Eigenvalues. $P_k^{-1} H_k$ has at least $m - n + n_1$ eigenvalues at 1; the other eigenvalues are positive and of the form $\lambda = 1 + \mu$ with

$$\mu = \frac{u^T Q_k u + v^T (W_k E_k)_I v}{u^T u + v^T (\Delta_k S_k^2)_I v},$$

where $(u^T, v^T)^T$ is an eigenvector associated with $\lambda$. If $\mu$ is small, the eigenvalues of $P_k^{-1} H_k$ are clustered around one. This is the case when $x_k$ is close to the solution.

18 Eigenvalues ($x_k$ far away from $x^*$). The eigenvalues of $P_k^{-1} H_k$ have the form $\lambda = 1 + \mu$ and:

If $(\Delta_k)_{i,i} = \delta > 0$ for $i \in I_k$, then

$$\mu \le \frac{\|A_A (S_k)_A\|^2}{\tau} + \frac{\tau}{\delta(1-\tau)}.$$

If

$$(\Delta_k)_{i,i} = \begin{cases} (w_k)_i (e_k)_i & \text{for } i \in I_k \text{ and } (w_k)_i (e_k)_i \ne 0 \\ \delta > 0 & \text{for } i \in I_k \text{ and } (w_k)_i (e_k)_i = 0, \end{cases}$$

then

$$\mu \le \frac{\|A_A (S_k)_A\|^2}{\tau} + \tau,$$

a better distribution of the eigenvalues. A scaling of $A$ at the beginning of the process is advisable.

19 Solving the augmented system by PPCG. We can adopt the Projected Preconditioned Conjugate Gradient (PPCG) method [Gould, 1999], [Dollar, Gould, Schilders, Wathen, SIMAX, 2006]. It is a CG procedure for solving indefinite systems

$$\begin{pmatrix} H & A \\ A^T & -C \end{pmatrix} \begin{pmatrix} p \\ q \end{pmatrix} = \begin{pmatrix} g \\ 0 \end{pmatrix}$$

with $H \in \mathbb{R}^{m \times m}$ symmetric, $C \in \mathbb{R}^{n \times n}$ ($n \le m$) symmetric, and $A \in \mathbb{R}^{m \times n}$ of full rank, using preconditioners of the form

$$\begin{pmatrix} G & A \\ A^T & -T \end{pmatrix}$$

with $G$ symmetric and $T$ nonsingular.

20 When $C$ is nonsingular, PPCG is equivalent to applying PCG to the system $(H + A C^{-1} A^T) p = g$ with preconditioner $G + A T^{-1} A^T$. In our case, it is equivalent to applying PCG to the system

$$\underbrace{\left(I + Q_k + A_I (S_k C_k^{-1} S_k)_I A_I^T\right)}_{F_k} q_k = -(A x_k - b),$$

using the preconditioner

$$G_k = I + A_I (\Delta_k)_I^{-1} A_I^T.$$
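A matrix-free sketch of this equivalence with SciPy's CG: $F_k$ is applied through products with $A_I$ and $A_A$, while $G_k$ is formed and factorized explicitly. The function name and the splitting of the diagonal vectors are illustrative assumptions; `delta_I` must be positive, as the dynamic regularization below guarantees on $I_k$.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, splu

def solve_reduced(A_I, A_A, sI, sA, cI, cA, delta_I, rhs):
    """PCG on F_k q = rhs, preconditioned by G_k = I + A_I Delta_I^{-1} A_I^T."""
    m = A_I.shape[0]
    def F(q):   # (I + Q_k + A_I (S C^{-1} S)_I A_I^T) q, all matrix-free
        return (q + A_A @ ((sA**2 / cA) * (A_A.T @ q))
                  + A_I @ ((sI**2 / cI) * (A_I.T @ q)))
    G = sp.eye(m, format='csc') + A_I @ sp.diags(1.0 / delta_I) @ A_I.T
    Glu = splu(sp.csc_matrix(G))
    q, info = cg(LinearOperator((m, m), matvec=F), rhs,
                 M=LinearOperator((m, m), matvec=Glu.solve))
    return q
```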

21 Eigenvalues of $G_k^{-1} F_k$. If $(\Delta_k)_{i,i} = (w_k)_i (e_k)_i$ for $i \in I_k$, then the eigenvalues of $G_k^{-1} F_k$ satisfy

$$\tau \le \lambda \le 1 + \frac{\|A_A (S_k)_A\|^2}{\tau}.$$

Drawback: differently from the previous results, no cluster of eigenvalues at 1 is guaranteed. Advantage: PPCG is characterized by a minimization property and requires a fixed amount of work per iteration.

22 Implementation issues. Dynamic regularization:

$$(\Delta_k)_{i,i} = \begin{cases} 0 & \text{if } i \notin I_k \text{ (i.e. } (w_k)_i (e_k)_i > \tau) \\ \min\{\max\{10^{-3}, (w_k)_i (e_k)_i\}, 10^{-2}\} & \text{otherwise.} \end{cases}$$

Iterative solver: PPCG with an adaptive choice of the tolerance in the stopping criterion; the linear systems are solved with an accuracy that increases as the solution is approached. PPCG is stopped when the preconditioned residual drops below

$$tol = \max\left(10^{-7},\ \eta_k \|W_k D_k g_k\| / \|A^T S_k\|_1\right).$$
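The regularization rule is a one-liner per component; a sketch ($\tau = 0.1$, as on the implementation slides):

```python
import numpy as np

def dynamic_delta(w, e, tau=0.1):
    """Diagonal of Delta_k from the diagonals of W_k and E_k."""
    we = w * e
    safeguarded = np.clip(we, 1e-3, 1e-2)   # min{max{1e-3, (we)_i}, 1e-2}
    return np.where(we > tau, 0.0, safeguarded)
```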

23 To avoid preconditioner factorizations, at iteration $k+1$ we freeze the set $I_k$ and the matrix $\Delta_k$ if

$$\#(IT_{PPCG})_k \le 30 \quad \text{and} \quad |\mathrm{card}(I_{k+1}) - \mathrm{card}(I_k)| \le 10.$$

If $I_k$ is empty (i.e. $(s_k^2)_i < 1 - \tau$ for all $i$), we apply PCG to the normal system

$$(S_k^T A^T A S_k + C_k)\, \tilde p_k = -S_k A^T (A x_k - b).$$

24 Matlab code ($\epsilon_m \approx 2.2 \cdot 10^{-16}$). The threshold $\tau$ is set to 0.1. Initial guess $x_0 = (1, \ldots, 1)^T$. Successful termination is declared when

$$|q_{k-1} - q_k| < \epsilon (1 + |q_{k-1}|), \qquad \|x_k - x_{k-1}\|_2 \le \sqrt{\epsilon}\, (1 + \|x_k\|_2), \qquad \|P(x_k - g_k) - x_k\|_2 < \epsilon^{1/3} (1 + \|g_k\|_2),$$

or when

$$\|P(x_k - g_k) - x_k\|_2 \le \epsilon.$$

A failure is declared after 100 iterations.
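A sketch of these termination tests ($\epsilon = 10^{-9}$ below is a placeholder, not the talk's value):

```python
import numpy as np

def converged(q_prev, q, x_prev, x, g, eps=1e-9):
    """Stopping tests on objective decrease, step size and projected gradient."""
    proj = np.linalg.norm(np.maximum(x - g, 0.0) - x)   # ||P(x - g) - x||_2
    small_q = abs(q_prev - q) < eps * (1.0 + abs(q_prev))
    small_x = np.linalg.norm(x - x_prev) <= np.sqrt(eps) * (1.0 + np.linalg.norm(x))
    small_g = proj < eps ** (1 / 3) * (1.0 + np.linalg.norm(g))
    return (small_q and small_x and small_g) or proj <= eps
```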

25 Test Problems. The matrix $A$ is the transpose of the matrices in the LPnetlib subset of the University of Florida Sparse Matrix Collection. We discarded the matrices with $m < 1000$ and the matrices that are not of full rank, leaving a total of 56 matrices; dimensions range up to $10^5$. The vector $b$ is set to $b = A(1, 1, \ldots, 1)^T$. When $\|A\|_1 > 10^3$, we scaled the matrix using a simple row and column scaling scheme.

26 Numerical Results. On a total of 56 test problems we successfully solve 51 tests: 41 test problems are solved with fewer than 20 nonlinear iterations, and in 40 tests the average number of PPCG iterations does not exceed 40. In 8 tests the solution is the null vector: at each iteration $I_k = \emptyset$, $S_k^T A^T A S_k + C_k \approx I$, and the convergence of the linear solver is very fast.

27 Savings in the number of preconditioner factorizations (figure).

28 Percent reduction in the dimension $n$ (figure): we solve augmented systems of reduced dimension $m + n_1$.

29 Future work. More experimentation, using also QMR and GMRES. Develop a code for the more general problem

$$\min_{l \le x \le u} q(x) = \tfrac{1}{2}\|Ax - b\|_2^2 + \tfrac{1}{2}\mu\|x\|_2^2.$$

If $\mu > 0$, $A$ may also be rank deficient and the augmented systems are naturally regularized. Comparison with existing codes (e.g. BCLS (Friedlander), PDCO (Saunders)).
