Error Estimation and Evaluation of Matrix Functions
1 Error Estimation and Evaluation of Matrix Functions. Bernd Beckermann, Carl Jagels, Miroslav Pranić, Lothar Reichel. UC3M, Nov. 16, 2010.
2 Outline: Approximation of functions of matrices: $f(A)v$ with $A$ large and sparse. Polynomial approximation: Reduction to a small problem by the Arnoldi process. Error bounds via the Faber transform. Rational approximation: Reduction to a small problem by rational Arnoldi. $A$ symmetric, one or several distinct poles: Derivation of short recursion formulas. Computed examples.
3 Approximation of functions of matrices. $A \in \mathbb{R}^{n\times n}$ large, sparse or structured; $f$ nonlinear; $v$ a unit vector. Approximate $w := f(A)v$. Examples: $f(t) = \exp(t)$, $f(t) = \sqrt{t}$, $f(t) = \ln(t)$.
4 If $A$ is small, then several approaches are possible, including the use of the spectral factorization $A = S\Lambda S^{-1}$, $\Lambda = \mathrm{diag}[\lambda_1,\lambda_2,\ldots,\lambda_n]$: $f(A)v = S f(\Lambda) S^{-1} v$, where $f(\Lambda) = \mathrm{diag}[f(\lambda_1),f(\lambda_2),\ldots,f(\lambda_n)]$. Other factorizations (Schur, Cholesky) and scaling and squaring are also options. Reference for small problems: N. J. Higham, Functions of Matrices, SIAM, 2008. If $A$ is large, then first reduce to a small matrix.
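A minimal NumPy sketch of the spectral-factorization approach for a small symmetric $A$ (the function name and test matrix are my illustration, not from the talk); it checks the result against SciPy's expm:

```python
import numpy as np
from scipy.linalg import expm

def f_of_A_times_v(f, A, v):
    # A = S diag(lam) S^T for symmetric A, so f(A)v = S f(lam) (S^T v)
    lam, S = np.linalg.eigh(A)
    return S @ (f(lam) * (S.T @ v))

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 0.0])
print(np.allclose(f_of_A_times_v(np.exp, A, v), expm(A) @ v))  # True
```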
5 Polynomial Approximation. $m$ steps of the Arnoldi process with initial vector $v$ give $A V_m = V_m H_m + g_m e_m^T$, where $V_m = [v_1,v_2,\ldots,v_m] \in \mathbb{R}^{n\times m}$, $v_1 = v$, $V_m^T V_m = I$, $H_m = V_m^T A V_m$ Hessenberg, $V_m^T g_m = 0$, $e_m = [0,\ldots,0,1]^T \in \mathbb{R}^m$.
6 Define the Krylov subspace $K_m(A,v) = \mathrm{span}\{v, Av, \ldots, A^{m-1}v\}$. Then $\mathrm{range}(V_m) = K_m(A,v)$ and $V_m e_j = p_{j-1}(A)v$ with $p_{j-1} \in P_{j-1}$. Approximate $w := f(A)v$ by $w_m := V_m f(H_m) e_1$. This is a polynomial approximant: $V_m f(H_m) e_1 = p(A)v$, $p \in P_{m-1}$.
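A sketch (variable names mine, not from the slides) of this reduction: $m$ Arnoldi steps build $V_m$ and $H_m$, and $w_m = V_m f(H_m)e_1$ approximates $f(A)v$ at the cost of evaluating $f$ on an $m\times m$ matrix only:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, v, m):
    # m steps of Arnoldi: A V_m = V_m H_m + g_m e_m^T with V_m^T V_m = I
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

n, m = 200, 30
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))       # 1D Laplacian, for illustration
v = np.ones(n) / np.sqrt(n)                # unit starting vector
V, H = arnoldi(A, v, m)
w_m = V @ expm(H)[:, 0]                    # w_m = V_m f(H_m) e_1 for f = exp
print(np.linalg.norm(expm(A) @ v - w_m))   # small for modest m
```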
7 For any polynomial $p \in P_{m-1}$, $p(A)v = p(A)V_m e_1 = V_m p(H_m) e_1$. Therefore $f(A)v - V_m f(H_m)e_1 = (f-p)(A)v - V_m (f-p)(H_m)e_1$, so $\|f(A)v - V_m f(H_m)e_1\| \le \|(f-p)(A)\| + \|(f-p)(H_m)\|$. How can we bound the right-hand side?
8 Crouzeix 2006: There is a universal constant $2 \le C \le 11.5$ such that, for any $A \in \mathbb{C}^{n\times n}$ and any function $f$ analytic in the field of values $W(A) = \{y^* A y : y \in \mathbb{C}^n, \|y\| = 1\}$, there holds $\|f(A)\| \le C\, \|f\|_{L_\infty(W(A))}$. Corollary: Let $f$ be analytic in $W(A)$. Then $\|f(A)v - V_m f(H_m)e_1\| \le 23 \min_{p \in P_{m-1}} \|f - p\|_{L_\infty(W(A))}$. Application of the Faber transform yields a sharper bound.
9 Error bounds via the Faber transform. Let $E$ be a convex compact set, symmetric with respect to the real axis, containing the field of values; let $E^c = \mathbb{C}\setminus E$. Example: When $A \in \mathbb{R}^{n\times n}$ is symmetric, let $E$ be a real interval containing $\lambda(A)$. Example: When $A \in \mathbb{R}^{n\times n}$ is normal, let $E$ be the convex hull of $\lambda(A)$.
10 The Faber transform $\Phi$ maps the polynomial $p(w) = a_0 w^0 + a_1 w^1 + \cdots + a_m w^m$, $a_j \in \mathbb{C}$, to the polynomial $\Phi(p)(z) = a_0 F_0(z) + a_1 F_1(z) + \cdots + a_m F_m(z)$, where $F_j$ is the Faber polynomial of degree $j$ for $E$. Example: Let $E = [-1,1]$. The Faber polynomials are scaled Chebyshev polynomials of the first kind. Example: Let $E = \{z : |z - c| \le r\}$. Then $F_m(z) = (z-c)^m / r^m$.
11 Let $D$ denote the closed unit disc in $\mathbb{C}$, and let $\varphi : E^c \to D^c$ be the unique conformal mapping with $\varphi(\infty) = \infty$, $\varphi'(\infty) > 0$; set $\psi = \varphi^{-1}$. The Faber transform $\Phi$ is a bijection between functions $F$ analytic in $D$ and functions $f$ analytic in $E$: for $z \in \mathrm{Int}(E)$, $f(z) = \Phi(F)(z) = \frac{1}{2\pi i} \int_{\partial E} \frac{F(\varphi(\zeta))}{\zeta - z}\, d\zeta$; for $w \in \mathrm{Int}(D)$, $F(w) = \Phi^{-1}(f)(w) = \frac{1}{2\pi i} \int_{\partial D} \frac{f(\psi(u))}{u - w}\, du$.
12 Example: The Faber polynomials are given by $F_m(z) = \Phi(P)(z)$ for $P(w) = w^m$, $m = 0, 1, 2, \ldots$; i.e., $F_m \in P_m$ is the polynomial part of $\varphi(z)^m$.
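For $E = [-1,1]$ the Chebyshev example of slide 10 can be made explicit; this short computation is standard and not spelled out in the slides:

```latex
% For E = [-1,1]: psi(w) = (w + 1/w)/2 and phi(z) = z + sqrt(z^2 - 1).
% Setting z = cos(theta) gives phi(z) = e^{i*theta}, hence
%   phi(z)^m + phi(z)^{-m} = 2 cos(m*theta) = 2 T_m(z),  so
\[
   \varphi(z)^m \;=\; 2\,T_m(z) \;-\; \varphi(z)^{-m}.
\]
% Since phi(z)^{-m} is analytic in E^c and vanishes at infinity, the
% polynomial part of phi(z)^m is F_m(z) = 2 T_m(z) for m >= 1 (F_0 = 1):
% precisely the "scaled Chebyshev polynomials" of slide 10.
```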
13 Theorem: Let $f = \Phi(F)$. Then $\|\Phi^{-1}\|^{-1}\, \eta_m(F, D) \le \eta_m(f, E) \le \|\Phi\|\, \eta_m(F, D)$, with $\|\Phi\| \le 2$, where $\eta_m(f, E) = \min_{p \in P_m} \|f - p\|_{L_\infty(E)}$ and $\eta_m(F, D) = \min_{P \in P_m} \|F - P\|_{L_\infty(D)}$.
14 Theorem: Let $E$ be convex and $W(A) \subset E$. Define $\Phi^+(F)(z) = \Phi(F)(z) + F(0)$ for $F$ analytic in $D$. Then $\|\Phi^+\| \le 2$. Moreover, for $F$ analytic in $D$, $\|\Phi^+(F)(A)\| \le 2\, \|F\|_{L_\infty(D)}$.
15 Proof: Use the representation $\Phi^+(F)(A) = \Phi(F)(A) + F(0)I = \frac{1}{\pi} \int_{\partial D} F(w) K(w)\, |dw|$, where $K(w) := \frac{1}{2i}\left( \psi'(w)\,(\psi(w)I - A)^{-1} \frac{dw}{|dw|} - \overline{\psi'(w)}\,(\overline{\psi(w)}\,I - A^*)^{-1} \frac{\overline{dw}}{|dw|} \right)$ and $A^*$ is the conjugate transpose of $A$.
16 Corollary: Let $E$ be convex and $W(A) \subset E$. Let $f \in A(E)$, $F := \Phi^{-1}(f)$, $P \in P_m$, and $p(z) := \Phi^+(P)(z) - F(0)$. Then $\|f(A) - p(A)\| \le 2\, \|F - P\|_{L_\infty(D)}$.
17 Theorem: Let $W(A) \subset E$ and let $f = \Phi(F)$ be analytic in $E$. Then $\|f(A)v - V_{m+1} f(H_{m+1}) e_1\| \le 4\, \eta_m(F, D)$. The Arnoldi process gives accurate results if $F$ can be approximated well by a polynomial of fairly low degree on $D$. Related results have been shown by Druskin, Knizhnerman, Hochbruck, Lubich, ...
18 Rational Arnoldi. Determine an orthonormal basis $\{v_j\}_{j=1}^{m+1}$ of the rational Krylov subspace $(q(A))^{-1}\, \mathrm{span}\{v, Av, \ldots, A^m v\}$, where $q(z) := (z - z_1)(z - z_2)\cdots(z - z_m)$. Let $z_0 \in \mathbb{C}$, $z_0 \ne z_j$, $j \ge 1$, and let $v_1 = v$. For $j = 1, 2, \ldots$, determine $v_{j+1}$ by orthonormalizing $(z_j - z_0)(z_j I - A)^{-1}(A - z_0 I)v_j$ against the available basis vectors $v_1, v_2, \ldots, v_j$.
19 This defines the coefficients $h_{k,j}$ in $h_{j+1,j}\, v_{j+1} = (z_j - z_0)(z_j I - A)^{-1}(A - z_0 I)v_j - h_{1,j}v_1 - h_{2,j}v_2 - \cdots - h_{j,j}v_j$, $j = 1, 2, \ldots$. In matrix notation, with $z_{m+1} = \infty$, $(A - z_0 I)V_{m+1}(I + H_{m+1}D_{m+1}) = V_{m+1}H_{m+1} + h_{m+2,m+1}v_{m+2}e_{m+1}^T$, where $D_{m+1} = \mathrm{diag}[(z_1 - z_0)^{-1}, (z_2 - z_0)^{-1}, \ldots, (z_{m+1} - z_0)^{-1}]$.
20 Projected matrix: $A_{m+1} := V_{m+1}^T A V_{m+1} = z_0 I + H_{m+1}(I + H_{m+1} D_{m+1})^{-1}$. This simplifies to the (standard) Arnoldi projection $A_{m+1} = H_{m+1}$ used for polynomial approximation when $z_0 = 0$ and $z_j = \infty$, $j = 1, 2, \ldots$.
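A dense-algebra sketch of the rational Arnoldi recursion of slides 18-20, with shift $z_0 = 0$ and a single repeated finite pole; the pole choice, test matrix, and target function $f(x) = \exp(-x)$ are my illustration, not the talk's:

```python
import numpy as np
from scipy.linalg import expm, solve

def rational_arnoldi(A, v, m, poles, z0=0.0):
    n = len(v)
    V = np.zeros((n, m + 1))
    V[:, 0] = v / np.linalg.norm(v)
    I = np.eye(n)
    for j in range(m):
        zj = poles[j]
        # next direction: (z_j - z_0)(z_j I - A)^{-1} (A - z_0 I) v_j
        w = (zj - z0) * solve(zj * I - A, (A - z0 * I) @ V[:, j])
        for i in range(j + 1):             # Gram-Schmidt against v_1..v_j
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

n, m = 200, 12
A = np.diag(np.linspace(1.0, 100.0, n))    # SPD test matrix
v = np.ones(n) / np.sqrt(n)
V = rational_arnoldi(A, v, m, poles=[-1.0] * m)
Am = V.T @ A @ V                           # projected matrix A_{m+1}
w_m = V @ expm(-Am)[:, 0]                  # approximates f(A)v, f = exp(-x)
print(np.linalg.norm(np.exp(-np.diag(A)) * v - w_m))  # approximation error
```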
21 Theorem: Let $W(A) \subset E$ and let $f = \Phi(F)$ be analytic in $E$. Let $z_1, z_2, \ldots, z_m \notin E$ and $z_{m+1} = \infty$. Then $\|f(A)v - V_{m+1} f(A_{m+1}) e_1\| \le 4\, \eta_m^Q(F, D)$, where $Q(w) = (w - w_1)(w - w_2)\cdots(w - w_m)$, $w_j = \varphi(z_j)$, and $\eta_m^Q(F, D) = \min_{P \in P_m} \|F - P/Q\|_{L_\infty(D)}$.
22 Approximation of Markov functions. Let $d\mu$ be a positive measure with support in $[\alpha, \beta]$, $-\infty \le \alpha < \beta < \infty$. Then $f(z) = \int_\alpha^\beta \frac{d\mu(x)}{z - x}$ is a Markov function. Note: $f$ is analytic in $\mathbb{C}\setminus[\alpha,\beta]$. Examples: $f(z) = \frac{\log(1+z)}{z}$ and $f(z) = z^{-\gamma}$, $0 < \gamma < 1$, $z \in \mathbb{C}\setminus\mathbb{R}_{\le 0}$, are Markov functions.
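A quick numeric check (mine, not from the slides) of the second example for $\gamma = 1/2$: the standard representation $z^{-1/2} = \frac{1}{\pi}\int_0^\infty \frac{x^{-1/2}}{z+x}\,dx$ exhibits $z^{-1/2}$ as a Markov function with $d\mu$ supported on $(-\infty, 0]$; the substitution $x = t^2$ removes the endpoint singularity:

```python
import numpy as np
from scipy.integrate import quad

z = 2.5
# integral_0^inf x^(-1/2)/(z+x) dx with x = t^2 -> integral_0^inf 2/(t^2+z) dt
val, _ = quad(lambda t: 2.0 / (t * t + z), 0.0, np.inf)
print(val / np.pi, z ** -0.5)   # both ~ 0.6325
```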
23 Define the polynomial $q(w) = \prod_{j=1}^m (w - w_j)$, $1 < |w_j| \le \infty$, with real or pairwise complex conjugate zeros. Introduce the Blaschke product $B(w) = \frac{w^m\, q(1/w)}{q(w)} = \prod_{j=1}^m \frac{1 - w_j w}{w - w_j}$.
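A small check (names and zeros mine) that $B$ behaves as a Blaschke-type product: with zeros $w_j$ real or in conjugate pairs and $|w_j| > 1$, one has $|B(w)| = 1$ on the unit circle:

```python
import numpy as np

w_zeros = np.array([3.0, 1.5 + 1.0j, 1.5 - 1.0j])  # real zero + conjugate pair

def B(w):
    # B(w) = w^m q(1/w)/q(w) = prod_j (1 - w_j w)/(w - w_j)
    return np.prod((1.0 - w_zeros * w) / (w - w_zeros))

for theta in (0.3, 1.1, 2.7):
    print(abs(B(np.exp(1j * theta))))   # ~ 1.0 on the unit circle
```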
24 Theorem: Let $E$ be compact, convex, and symmetric with respect to the real axis. Let $f$ be a Markov function with $\alpha < \beta < \gamma = \min\{\mathrm{Re}(z) : z \in E\}$. Then $F = \Phi^{-1}(f)$ is a Markov function, $F(w) = \int_\alpha^\beta \frac{\varphi'(x)\,d\mu(x)}{w - \varphi(x)} =: \int \frac{d\hat\mu(\hat x)}{w - \hat x}$.
25 Theorem (cont'd): Let $R = P/q$ with $P \in P_{m-1}$ be the rational interpolant of $F$ with prescribed poles $w_j$ that interpolates $F$ at the reflected points $1/w_j$, $j = 1, 2, \ldots, m$. Define $r(w) = R(w) + B(w)\left( \frac{F(1) - R(1)}{2B(1)} + \frac{F(-1) - R(-1)}{2B(-1)} \right)$. Then $r \in P_m/q$ and $\tilde\eta_m^q(F, D) \le \|F - r\|_{L_\infty(D)} \le \frac{\|f\|_{L_\infty(E)}}{|\varphi(\beta)|} \max_{y \in \varphi([\alpha,\beta])} \frac{1}{|B(y)|}$. A bound for rational approximants of $f$ on $E$ is given by $\eta_m^q(f, E) \le 2\, \tilde\eta_m^q(\Phi^{-1}(f), D)$.
26 Rational Lanczos with a fixed pole. Inspired by Druskin and Knizhnerman (SIMAX, '98). Let $A$ be symmetric and nonsingular. Determine orthonormal bases for the rational (extended) Krylov subspaces $K_{l,m}(A,v) := \mathrm{span}\{A^{-l+1}v, \ldots, A^{-1}v, v, Av, \ldots, A^{m-1}v\}$. We consider $m = il$ for $i = 1, 2, 3, \ldots$.
27 A Lanczos-like method for orthogonalizing $K_{1,2}(A,v), K_{2,3}(A,v), \ldots$ for $A$ SPD ($i = 1$). Let $\{v_0, v_1, v_{-1}, v_2, \ldots, v_{m-1}, v_{-m+1}, v_m\}$ be an ON basis for $K_{m,m+1}(A,v)$. Represent the $v_j$ in terms of monic orthogonal Laurent polynomials:
28 $\varphi_j(x) := x^j + \sum_{k=-j+1}^{j-1} c_{j,k}\, x^k$, $j = 0, 1, \ldots, m$; $\varphi_j(x) := x^j + \sum_{k=j+1}^{-j} c_{j,k}\, x^k$, $j = -1, -2, \ldots, -m+1$; with $w_j = \varphi_j(A)v$ and $v_j = w_j / \|w_j\|$.
29 Orthogonality is with respect to the inner product $(p, q) := (p(A)v)^T (q(A)v)$. By symmetry of $A$: $(xp, q) = (p, xq)$.
30 Theorem (Njåstad and Thron, '83): The orthogonal Laurent polynomials $\varphi_j$ satisfy short recursion relations. Survey: Jones and Njåstad, JCAM, '91. Recent work: Díaz-Mendoza, González-Vera, Jiménez Paiz, and Njåstad. The simplest recursions arise when $A$ is SPD $\Rightarrow$ the trailing coefficient of every $\varphi_j$ is nonvanishing.
31 Algorithm: Compute the ON basis $\{v_k\}_{k=-m+1}^{m}$ of $K_{m,m+1}(A,v)$.
δ_0 := ||v||;  v_0 := v/δ_0;
u := A v_0;  α_0 := v_0^T u;  u := u − α_0 v_0;
δ_1 := ||u||;  v_1 := u/δ_1;
for k = 1, 2, ..., m−1 do
    w := A^{−1} v_k;
    β_{−k+1} := v_{−k+1}^T w;  w := w − β_{−k+1} v_{−k+1};
    β_k := v_k^T w;  w := w − β_k v_k;
    δ_{−k} := ||w||;  v_{−k} := w/δ_{−k};
    u := A v_{−k};
    α_k := v_k^T u;  u := u − α_k v_k;
    α_{−k} := v_{−k}^T u;  u := u − α_{−k} v_{−k};
    δ_{k+1} := ||u||;  v_{k+1} := u/δ_{k+1};
end
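A NumPy transcription of the recursion above (mine; the minus signs dropped in the transcription are restored as read above, so treat it as a sketch): each pass costs one multiplication by $A$ and one solve with $A$, and only the stated local orthogonalizations are performed, yet the basis comes out orthonormal:

```python
import numpy as np

def extended_krylov_basis(A, v, m):
    # returns [v_0, v_1, v_{-1}, v_2, ..., v_{-m+1}, v_m] as columns
    V = {0: v / np.linalg.norm(v)}
    u = A @ V[0]
    u -= (V[0] @ u) * V[0]
    V[1] = u / np.linalg.norm(u)
    for k in range(1, m):
        w = np.linalg.solve(A, V[k])       # w := A^{-1} v_k
        w -= (V[-k + 1] @ w) * V[-k + 1]   # beta_{-k+1} step
        w -= (V[k] @ w) * V[k]             # beta_k step
        V[-k] = w / np.linalg.norm(w)      # v_{-k}
        u = A @ V[-k]                      # u := A v_{-k}
        u -= (V[k] @ u) * V[k]             # alpha_k step
        u -= (V[-k] @ u) * V[-k]           # alpha_{-k} step
        V[k + 1] = u / np.linalg.norm(u)   # v_{k+1}
    order = [0, 1] + [j for k in range(1, m) for j in (-k, k + 1)]
    return np.column_stack([V[j] for j in order])

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(1.0, 10.0, 50)) @ Q.T   # SPD test matrix
Vb = extended_krylov_basis(A, rng.standard_normal(50), 5)
print(np.linalg.norm(Vb.T @ Vb - np.eye(Vb.shape[1])))  # ~1e-15: the short
                                                        # recursion suffices
```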
32 Let $V_{2m-1} = [v_0, v_1, v_{-1}, v_2, \ldots, v_{m-1}, v_{-m+1}]$. From the recursion formulas: $H_{2m-1} = V_{2m-1}^T A V_{2m-1}$, $A V_{2m-1} = V_{2m} \hat H_{2m-1}$, $G_{2m} = V_{2m}^T A^{-1} V_{2m}$, $A^{-1} V_{2m} = V_{2m+1} \hat G_{2m}$. Odd-numbered columns of $H_{2m-1}$ have at most 3 nontrivial elements and even-numbered columns have at most 5.
33 Example (nonzero entries marked *):
H_8 = [ * * . . . . . . ]
      [ * * * * . . . . ]
      [ . * * * . . . . ]
      [ . * * * * * . . ]
      [ . . . * * * . . ]
      [ . . . * * * * * ]
      [ . . . . . * * * ]
      [ . . . . . * * * ]
34 Example (nonzero entries marked *):
G_8 = [ * * * . . . . . ]
      [ * * * . . . . . ]
      [ * * * * * . . . ]
      [ . . * * * . . . ]
      [ . . * * * * * . ]
      [ . . . . * * * . ]
      [ . . . . * * * * ]
      [ . . . . . . * * ]
Both $G_{2m}$ and $H_{2m}$ are pentadiagonal.
35 Moreover, $H_{2m} G_{2m} = I + e_{2m} u_{2m}^T$, where only the last two entries of $u_{2m}$ may be nonvanishing.
36 A Lanczos-like method for orthogonalizing $K_{1,2}(A,v), K_{2,3}(A,v), \ldots$ for $A$ indefinite ($i = 1$). The trailing coefficient of the $\varphi_j$ may vanish $\Rightarrow$ a new derivation of the recursion formulas. There may be 5-term recursions.
37 Example: $H_7$ = [sparsity pattern of a $7\times 7$ matrix with 20 nonzero entries].
38 A Lanczos-like method for orthogonalizing $K_{1,2}(A,v), K_{2,4}(A,v), \ldots$ for $A$ SPD ($i = 2$). Orthogonal Laurent polynomials: $\varphi_j(x) := x^j + \sum_{k=-\lfloor (j-1)/2 \rfloor}^{j-1} c_{j,k}\, x^k$, $j = 0, 1, \ldots$; $\varphi_{-j}(x) := x^{-j} + \sum_{k=-j+1}^{2j} c_{-j,k}\, x^k$, $j = 1, 2, \ldots$; with $v_j = \varphi_j(A)v / \|\varphi_j(A)v\|$, $j = 0, 1, 2, -1, 3, 4, -2, 5, \ldots$.
39 Then $\{v_0, v_1, v_2, v_{-1}, v_3, \ldots, v_{-m+1}, v_{2m-1}\}$ is an orthonormal basis for $K_{m,2m}(A,v)$. Example: The matrix $H_{10}$ is pentadiagonal:
40 [Figure: sparsity pattern of the pentadiagonal matrix $H_{10}$.]
41 A Lanczos-like method for orthogonalizing $K_{1,i}(A,v), K_{2,2i}(A,v), \ldots$ for $A$ SPD, $i \ge 2$. We want to determine an orthonormal basis of $v, Av, A^2v, \ldots, A^i v, A^{-1}v, A^{i+1}v, \ldots, A^{2i}v, A^{-2}v, A^{2i+1}v, \ldots$, with associated orthogonal Laurent polynomials $\varphi_0, \varphi_1, \ldots, \varphi_i, \varphi_{-1}, \varphi_{i+1}, \ldots, \varphi_{2i}, \varphi_{-2}, \ldots$
42 of the form $\varphi_j(x) := x^j + \sum_{k=-\lfloor (j-1)/i \rfloor}^{j-1} c_{j,k}\, x^k$, $j = 1, 2, 3, \ldots$; $\varphi_{-j}(x) := x^{-j} + \sum_{k=-j+1}^{ij} c_{-j,k}\, x^k$, $j = 1, 2, 3, \ldots$; with $\varphi_0(x) := 1$. Example: Let $m = 2$ and $i = 3$. Then $H_8$ is of the form
43 [Sparsity pattern of $H_8$ for $m = 2$, $i = 3$; nonzero entries marked x.]
44 Computed examples. $A \in \mathbb{R}^{1000\times 1000}$, $v \in \mathbb{R}^{1000}$ random. Tabulate the error in approximations obtained by 42 steps of standard Lanczos or rational Lanczos.
45 Example 1. $A = n^2 \cdot \mathrm{tridiag}[-1, 2, -1]$, SPD; $n = 1000$. [Table: errors for $f(x) = \exp(x)$, $\sqrt{x}$, $\exp(-x)$, $\ln(x)$, $\exp(-x)/x$ with Lanczos (42 steps) and rational Lanczos with (21,22) and (14,29) steps.]
46 Example 2. $A = [a_{j,k}]$, $a_{j,k} = 1/(1 + |j - k|)$, Toeplitz, SPD, $n = 1000$. [Table: errors for $f(x) = \exp(x)$, $\sqrt{x}$, $\exp(-x)$, $\ln(x)$, $\exp(-x)/x$ with Lanczos (42 steps) and rational Lanczos with (21,22) and (14,29) steps.]
47 Example 2 (cont'd): $f(x) = \exp(-x)/x$. [Figure: approximation errors, from top to bottom: Lanczos, rational Lanczos for $i = 1, 2$; $A$ is positive definite Toeplitz.]
48 Example 3. The matrix $A$ stems from the discretization of the differential operator $L(u) = -\frac{1}{10} u_{xx} - 100 u_{yy}$ on the unit square. [Table: errors for $f(x) = 1/\sqrt{x}$ with Lanczos (42 steps) and rational Lanczos with (21,22) and (14,29) steps.]
49 Example 3 (cont'd): $f(x) = \exp(-x)/x$. [Figure: approximation errors, from top to bottom: Lanczos, rational Lanczos for $i = 1, 2$; $A$ is generated from $L(u)$.]
50 Orthogonal rational functions with several fixed poles. $d\mu$: a nonnegative measure on (part of) the real axis. $(f,g) = \int_a^b f(x)g(x)\,d\mu$: inner product. $P$: the space of all polynomials with real coefficients. $Q = \mathrm{span}\left\{ \frac{1}{(x - \alpha_k)^s} : s \in \mathbb{N},\ \alpha_k \notin [a,b] \right\}$: a space of rational functions with real or pairwise complex conjugate poles $\alpha_k$.
51 Assume the poles are ordered so that $\mathrm{Im}(\alpha_j) > 0 \Rightarrow \alpha_{j+1} = \overline{\alpha_j}$. Replace, for all $s = 1, 2, \ldots$, $\frac{1}{(x - \alpha_j)^s}$ and $\frac{1}{(x - \alpha_{j+1})^s}$ by $\frac{1}{(x^2 + p_j x + q_j)^s}$ and $\frac{x}{(x^2 + p_j x + q_j)^s}$, where $x^2 + p_j x + q_j = (x - \alpha_j)(x - \alpha_{j+1})$, $p_j, q_j \in \mathbb{R}$.
52 Define the linear space $P + Q = \mathrm{span}\Big\{1,\ x^s,\ \frac{1}{(x - \alpha_k)^s},\ \frac{1}{(x^2 + p_j x + q_j)^s},\ \frac{x}{(x^2 + p_j x + q_j)^s} : s \in \mathbb{N},\ \alpha_k \in \mathbb{R}\setminus[a,b],\ \alpha_j \in \mathbb{C}\setminus\mathbb{R},\ |\alpha_k|, |\alpha_j| < \infty \Big\}$.
53 Let $\Psi = \{\psi_0, \psi_1, \psi_2, \ldots\}$ denote an elementary basis for $P + Q$: $\psi_0(x) = 1$, and each $\psi_l(x)$, $l = 1, 2, \ldots$, is one of the functions $x^s$, $\frac{1}{(x - \alpha_k)^s}$, $\frac{1}{(x^2 + p_j x + q_j)^s}$, $\frac{x}{(x^2 + p_j x + q_j)^s}$ for some positive integers $k$, $j$, and $s$. The Gram-Schmidt process applied to the basis $\Psi$ yields a basis of orthonormal rational functions $\Phi = \{\varphi_0, \varphi_1, \varphi_2, \ldots\}$.
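A sketch of this Gram-Schmidt step for a concrete choice (the measure, pole, and sample grid are mine): for a discrete measure $d\mu = \sum_i \omega_i \delta_{x_i}$, orthonormalizing the sampled, $\sqrt{\omega}$-weighted basis functions by QR is exactly Gram-Schmidt in the inner product $(f,g) = \sum_i \omega_i f(x_i) g(x_i)$:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)               # support of the discrete measure
omega = np.full_like(x, 1.0 / x.size)        # weights omega_i
a1 = 2.0                                     # one real pole outside [0, 1]
Psi = np.column_stack([np.ones_like(x), x, 1.0 / (x - a1),
                       x**2, 1.0 / (x - a1)**2])
Q, R = np.linalg.qr(np.sqrt(omega)[:, None] * Psi)
# columns of Q are the orthonormal rational functions phi_j sampled at x_i
# (times sqrt(omega_i)); Q^T Q = I is orthonormality in the induced product
print(np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))   # ~1e-15
```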
54 The recursion relations for the $\varphi_j$ depend on the ordering of the basis functions $\psi_j$ of $\Psi$. We say the ordering of $\Psi$ is natural if, for all integers $s \ge 1$, all real poles $\alpha_k$, and all pairs $\{p_j, q_j\}$, $x^{s-1} \prec x^s$, $\frac{1}{(x-\alpha_k)^s} \prec \frac{1}{(x-\alpha_k)^{s+1}}$,
55 $\frac{1}{(x^2+p_jx+q_j)^s} \prec \frac{x}{(x^2+p_jx+q_j)^s} \prec \frac{1}{(x^2+p_jx+q_j)^{s+1}} \prec \frac{x}{(x^2+p_jx+q_j)^{s+1}}$.
56 Theorem: Let the basis $\Psi = \{\psi_0, \psi_1, \psi_2, \ldots\}$ be naturally ordered. Let every sequence of $m_1$ consecutive basis functions $\psi_k, \psi_{k+1}, \ldots, \psi_{k+m_1-1}$ contain at least one power $x^l$, and let there be at most $m_2$ basis functions between every pair of functions $\left\{ \frac{1}{(x^2+p_jx+q_j)^s}, \frac{x}{(x^2+p_jx+q_j)^s} \right\}$, $s = 1, 2, \ldots$.
57 Then the orthonormal rational functions $\varphi_0, \varphi_1, \varphi_2, \ldots$ satisfy a $(2m+1)$-term recurrence relation of the form $x\varphi_k(x) = \sum_{i=-m}^{m} c_{k,k+i}\, \varphi_{k+i}(x)$, $k = 0, 1, 2, \ldots$, with $m = \max\{m_1, m_2 + 1\}$. Here $c_{k,k+i}$ and $\varphi_{k+i}$ with $k + i < 0$ are zero.
58 Note: If $Q \ne \emptyset$, then we may order the basis $\Psi$ to get the smallest possible value of $m$, which is 2. This gives a 5-term recursion formula. If $Q = \emptyset$, then $m = 1$ and we obtain the 3-term recursion formula for orthogonal polynomials.
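Continuing the discrete-measure sketch from slide 53 (all concrete choices mine): with the naturally ordered basis $\{1, x, 1/(x-\alpha_1), x^2, 1/(x-\alpha_1)^2, \ldots\}$ every two consecutive basis functions contain a power ($m_1 = 2$) and there are no quadratic-pole pairs ($m_2 = 0$), so $m = 2$ and the matrix of inner products $(x\varphi_k, \varphi_j)$ should be pentadiagonal, matching the 5-term recursion of the note:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)
omega = np.full_like(x, 1.0 / x.size)
a1 = 2.0
Psi = np.column_stack([x**0, x, 1/(x - a1), x**2, 1/(x - a1)**2,
                       x**3, 1/(x - a1)**3, x**4])
Q, _ = np.linalg.qr(np.sqrt(omega)[:, None] * Psi)   # sampled phi_j
M = Q.T @ (x[:, None] * Q)                # M[j, k] = (x phi_k, phi_j)
mask = np.abs(np.subtract.outer(range(8), range(8))) > 2
print(np.abs(M[mask]).max())              # ~1e-16: entries outside the
                                          # pentadiagonal band vanish
```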
59 Theorem: Let the basis $\Psi = \{\psi_0, \psi_1, \psi_2, \ldots\}$ be naturally ordered. Let every sequence of $m_1$ consecutive basis functions $\psi_k, \psi_{k+1}, \ldots, \psi_{k+m_1-1}$ contain at least one function $(x - \alpha_l)^{-t}$, and let there be at most $m_2$ basis functions between every pair of functions $\left\{ \frac{1}{(x^2+p_jx+q_j)^s}, \frac{x}{(x^2+p_jx+q_j)^s} \right\}$, $s = 1, 2, \ldots$.
60 Then the orthonormal rational functions $\varphi_0, \varphi_1, \varphi_2, \ldots$ satisfy a $(2m+1)$-term recurrence relation of the form $\frac{1}{x - \alpha_l}\, \varphi_k(x) = \sum_{i=-m}^{m} c^{(l)}_{k,k+i}\, \varphi_{k+i}(x)$, $k = 0, 1, 2, \ldots$, with $m = \max\{m_1, m_2 + 1\}$. Here $c^{(l)}_{k,k+i}$ and $\varphi_{k+i}$ with $k + i < 0$ are zero.
61 Note: Let $P + Q = \mathrm{span}\{1, x, \ldots, x^l, x^{-1}, x^{l+1}, \ldots, x^{2l}, x^{-2}, x^{2l+1}, \ldots\}$. This defines a basis $\Psi$ with $m_1 = l + 1$ and $m_2 = 0$. Therefore, $x^{-1}\varphi_k(x)$ satisfies a recursion formula with $2l + 3$ terms. Note that $x\varphi_k(x)$ satisfies a 5-term recursion.
62 Short recursion formulas for $\frac{1}{x^2 + p_j x + q_j}\,\varphi_k(x)$ and $\frac{x}{x^2 + p_j x + q_j}\,\varphi_k(x)$ can also be established.
63 Extensions (work in progress): Application to the evaluation of matrix functions $f(A)b$ for nonsymmetric matrices. New derivation of rational Gauss quadrature rules.
64 Muchas Gracias