Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms


1 Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms

4. Monotonicity of the Lanczos Method

Michael Eiermann
Institut für Numerische Mathematik und Optimierung
Technische Universität Bergakademie Freiberg, Germany

Wintersemester

2 Outline

1 An observation
2 A first result
3 Strict monotonicity
4 M-matrices
5 Special functions, Stieltjes functions
6 The main theorem

3 An observation

We solve our model problem (the 1-D heat equation) whose semi-discrete version reads as

    u'(t) = Au(t), t > 0,   u(0) = b given,

where A = h^{-2} tridiag(1, -2, 1) ∈ R^{n×n}, h = 1/(n + 1). Its solution is u(t) = exp(tA)b.

First step: Hermitian Lanczos process. Given A ∈ C^{n×n} Hermitian, b ∈ C^n, f such that f(A) is defined.

    w = b, v_0 = 0
    For m = 1, 2, ...
        β_m = ‖w‖   (‖·‖ := ‖·‖_2)
        v_m = w/β_m
        w = A v_m − β_m v_{m−1}
        α_m = v_m^H w
        w = w − α_m v_m
    End
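For illustration, here is a minimal NumPy sketch of this process; the function name lanczos, its interface and the absence of reorthogonalization are choices made here, not part of the lecture.

```python
import numpy as np

def lanczos(A, b, m):
    """Run m steps of the Hermitian Lanczos process for (A, b).

    Returns V (n-by-m, orthonormal columns), the real symmetric tridiagonal
    matrix T = V^H A V, and beta1 = ||b||_2 (no reorthogonalization, no
    check for breakdown beta_j = 0)."""
    n = b.shape[0]
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m)                  # beta[j] corresponds to beta_{j+1}
    w = b.astype(complex)
    v_prev = np.zeros(n, dtype=complex)
    for j in range(m):
        beta[j] = np.linalg.norm(w)
        v = w / beta[j]
        V[:, j] = v
        w = A @ v - beta[j] * v_prev
        alpha[j] = np.vdot(v, w).real   # v^H w, real since A is Hermitian
        w = w - alpha[j] * v
        v_prev = v
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return V, T, beta[0]
```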

4 The columns of V_m = [v_1 v_2 … v_m] are an ON basis of K_m(A, b) and the tridiagonal matrix

          [ α_1  β_2                    ]
          [ β_2  α_2  β_3               ]
    T_m = [      β_3  α_3   ⋱           ]  ∈ R^{m×m}
          [            ⋱    ⋱    β_m    ]
          [                 β_m  α_m    ]

represents the compression of A onto K_m(A, b), i.e., T_m = V_m^H A V_m. Note that T_m is real,

    α_m = v_m^H (A v_m − β_m v_{m−1}) = v_m^H A v_m ∈ [λ_min(A), λ_max(A)]

because A is Hermitian.

Second step: Lanczos approximation to f(A)b,

    f_m = β_1 V_m exp(T_m) e_1 = V_m exp(V_m^H A V_m) V_m^H b.

We use expm (scaling and squaring) to calculate exp(T_m).
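Correspondingly, a sketch of the Lanczos approximation f_m = β_1 V_m exp(T_m) e_1, using scipy.linalg.expm (scaling and squaring) and the lanczos helper sketched above; both names are illustrative choices, not part of the lecture.

```python
import numpy as np
from scipy.linalg import expm   # scaling and squaring, as in MATLAB's expm

def lanczos_expm(A, b, m):
    """m-th Lanczos approximation f_m = beta_1 V_m exp(T_m) e_1 to exp(A) b."""
    V, T, beta1 = lanczos(A, b, m)      # helper sketched above
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta1 * (V @ (expm(T) @ e1))
```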

5 The model problem

[Plots of the norms ‖f_m‖ and of the errors ‖exp(A)b − f_m‖ versus m.]

n = 99, b = rand(n, 1): We observe monotone convergence.
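A small experiment along these lines (a sketch only: it reuses the lanczos_expm helper sketched above; the random seed and the number of Lanczos steps are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

n = 99
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2          # A = h^{-2} tridiag(1, -2, 1)
b = np.random.default_rng(0).random(n)            # b = rand(n, 1)

exact = expm(A) @ b                               # exp(A) b, feasible for small n
for m in range(1, 31):
    fm = lanczos_expm(A, b, m).real               # helper sketched above
    print(m, np.linalg.norm(fm), np.linalg.norm(exact - fm))
# According to the observation above, ||f_m|| increases and
# ||exp(A)b - f_m|| decreases monotonically with m.
```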

6 A first result

Theorem [Druskin (2008)]. Let A ∈ C^{n×n} be Hermitian and b ∈ C^n. For the Lanczos approximants f_m, m = 1, 2, …, L, to f = exp(A)b, there holds

    ‖f_1‖ ≤ ‖f_2‖ ≤ … ≤ ‖f_L‖ = ‖f‖,
    ‖f − f_1‖ ≥ ‖f − f_2‖ ≥ … ≥ ‖f − f_L‖ = 0.

Proof. First assume that A is positive definite. Then T_m ≥ O (entrywise). This implies

    O ≤ T̃_m := [ T_{m−1}   0 ]  ≤  [ T_{m−1}          β_m e_{m−1} ]  =  T_m.
                [ 0^T       0 ]     [ β_m e_{m−1}^T    α_m         ]

7 Thus O ≤ T̃_m^k ≤ T_m^k for all k = 0, 1, 2, … Since exp(T) = I + T + (1/2!) T^2 + … + (1/k!) T^k + …,

    I_m ≤ exp(T̃_m) = [ exp(T_{m−1})   0 ]  ≤  exp(T_m).
                      [ 0^T            1 ]

In particular,

    e_1 ≤ exp(T̃_m) e_1 = [exp(T_{m−1}) e_1; 0] ≤ exp(T_m) e_1.

Finally, ‖exp(T_{m−1}) e_1‖ ≤ ‖exp(T_m) e_1‖ and, since V_m has orthonormal columns and β_1 > 0,

    ‖f_{m−1}‖ = β_1 ‖V_{m−1} exp(T_{m−1}) e_1‖ = β_1 ‖exp(T_{m−1}) e_1‖
              ≤ β_1 ‖exp(T_m) e_1‖ = β_1 ‖V_m exp(T_m) e_1‖ = ‖f_m‖.
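A small numerical illustration of this entrywise argument, on arbitrary test data (the matrix below is not the T_m of the model problem):

```python
import numpy as np
from scipy.linalg import expm

m = 6
alpha = np.array([3.0, 2.5, 4.0, 3.2, 2.8, 3.6])    # positive diagonal entries
beta = np.array([1.0, 0.7, 1.2, 0.9, 0.5])          # positive off-diagonal entries
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

T_tilde = np.zeros_like(T)
T_tilde[:m - 1, :m - 1] = T[:m - 1, :m - 1]         # T_{m-1} padded with zeros

E1, E2 = expm(T_tilde), expm(T)
print(np.all(E1 <= E2 + 1e-12))                     # entrywise exp(T~) <= exp(T)
e1 = np.zeros(m); e1[0] = 1.0
print(np.linalg.norm(E1 @ e1) <= np.linalg.norm(E2 @ e1))   # norm comparison
```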

8 The monotonicity of the errors follows immediately: We have

    [exp(T_{m−1}) e_1; 0]  ≤  [exp(T_m) e_1; 0]  ≤  exp(T_L) e_1

(where [v; 0] denotes the vector v padded with zeros to length L), which implies

    O ≤ exp(T_L) e_1 − [exp(T_m) e_1; 0]  ≤  exp(T_L) e_1 − [exp(T_{m−1}) e_1; 0]

and thus

    ‖exp(T_L) e_1 − [exp(T_m) e_1; 0]‖  ≤  ‖exp(T_L) e_1 − [exp(T_{m−1}) e_1; 0]‖.

The assertion now follows from the observation

    ‖f − f_m‖ = ‖β_1 (V_L exp(T_L) e_1 − V_m exp(T_m) e_1)‖
              = ‖β_1 V_L (exp(T_L) e_1 − [exp(T_m) e_1; 0])‖
              = β_1 ‖exp(T_L) e_1 − [exp(T_m) e_1; 0]‖.

9 If A is an arbitrary Hermitian matrix we choose a shift µ ≥ 0 such that B = A + µI is positive definite. The Arnoldi approximations f_m^(B) to

    exp(B)b = exp(µ) exp(A)b

are given by

    f_m^(B) = β_1 V_m^(B) exp(T_m^(B)) e_1 = β_1 V_m^(A) exp(T_m^(A) + µI) e_1 = exp(µ) f_m^(A)

(easy exercise). This shows

    f_m^(A) = exp(−µ) f_m^(B),    exp(A)b − f_m^(A) = exp(−µ) (exp(B)b − f_m^(B)),

which proves the theorem.

Note we showed more than we claimed: We have not only normwise but componentwise (with respect to the basis V_L) monotonicity.

10 Strict monotonicity

The monotonicity results described in the previous theorem can be sharpened (exercise!):

    0 < ‖f_1‖ < ‖f_2‖ < … < ‖f_L‖ = ‖f‖,
    ‖f − f_1‖ > ‖f − f_2‖ > … > ‖f − f_L‖ = 0.

11 M-matrices

T = [t_{i,j}] ∈ R^{m×m} is a (nonsingular) M-matrix (Hermann Minkowski) if t_{i,j} ≤ 0 for all i ≠ j, T^{−1} exists and T^{−1} ≥ O.

We need the following properties of M-matrices.

Let A ∈ R^{n×n} have nonpositive off-diagonal entries. Then A is an M-matrix if and only if all eigenvalues of A have positive real parts.   (M1)

If A, B ∈ R^{n×n} are two M-matrices, then A ≤ B implies O ≤ B^{−1} ≤ A^{−1}.   (M2)

For A, E ∈ R^{n×n}, let A be an M-matrix and let A + E have nonpositive off-diagonal entries; then E ≥ O implies that A + E is an M-matrix.   (M3)
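A quick numerical illustration of these properties on hypothetical test matrices (not taken from the lecture):

```python
import numpy as np

def is_m_matrix(T):
    """Nonpositive off-diagonal entries and T^{-1} >= O entrywise."""
    off = T - np.diag(np.diag(T))
    return np.all(off <= 0) and np.all(np.linalg.inv(T) >= -1e-12)

A = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  3.0]])
B = A + np.diag([1.0, 0.5, 2.0])                 # A <= B entrywise, E >= O
print(is_m_matrix(A), is_m_matrix(B))            # True True; cf. (M1), (M3)
print(np.all(np.linalg.inv(B) <= np.linalg.inv(A) + 1e-12))   # (M2)
```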

12 Special functions

We consider functions f : (0, ∞) → R which can be represented as

    f(z) = ∫_0^∞ dµ(t)/(t + z)^k,   z > 0.

Here k ∈ N and µ is a nonnegative measure for which ∫_0^∞ (1 + t)^{−k} dµ(t) is finite.

Example. Let δ_x denote the Dirac measure (i.e., δ_x(M) = 1 if x ∈ M and δ_x(M) = 0 otherwise). Then

    z^{−k} = ∫_0^∞ dδ_0(t)/(t + z)^k.

More generally, for x_j > 0, π_j > 0 (j = 1, 2, …, m),

    Σ_{j=1}^m π_j/(z + x_j)^k = ∫_0^∞ d(Σ_{j=1}^m π_j δ_{x_j})(t)/(t + z)^k.

13 Stieltjes integrals and Stieltjes transformation

Let [α, β] be a real finite closed interval and ψ : [α, β] → R. Let ∆ : α = τ_0 < τ_1 < … < τ_m = β be a subdivision of [α, β] with norm ‖∆‖ := max_{1≤j≤m} (τ_j − τ_{j−1}). A set of pivotal elements Θ : τ'_1 ≤ τ'_2 ≤ … ≤ τ'_m consistent with ∆ consists of numbers τ'_j with τ_{j−1} ≤ τ'_j ≤ τ_j (j = 1, 2, …, m). For any (complex valued) function f defined on [α, β], set

    S(∆, Θ) := Σ_{j=1}^m f(τ'_j) (ψ(τ_j) − ψ(τ_{j−1})).

If there is a complex number S such that, given any ε > 0, a number δ = δ(ε) exists such that |S(∆, Θ) − S| ≤ ε for all subdivisions ∆ with ‖∆‖ ≤ δ and all consistent Θ, then

    S = ∫_α^β f(t) dψ(t)

is called the Stieltjes integral of f with respect to ψ on [α, β].

14 If ψ(t) = t + γ for some constant γ, the Stieltjes integral is the Riemann integral. If ψ is continuously differentiable on [α, β], then

    ∫_α^β f(t) dψ(t) = ∫_α^β f(t) ψ'(t) dt.

If ψ is a step function with finitely many jumps at ζ_1, ζ_2, …, ζ_m, i.e.,

    ψ(t) = 0                 for α ≤ t ≤ ζ_1,
    ψ(t) = Σ_{j=1}^k π_j     for ζ_k < t ≤ ζ_{k+1},
    ψ(t) = Σ_{j=1}^m π_j     for ζ_m < t ≤ β,

then

    ∫_α^β f(t) dψ(t) = Σ_{j=1}^m π_j f(ζ_j).

15 If f is continuous and ψ is nondecreasing on [α, β], then ∫_α^β f(t) dψ(t) exists. If f is continuous and ψ is nondecreasing on [α, ∞), we set

    ∫_α^∞ f(t) dψ(t) = lim_{β→∞} ∫_α^β f(t) dψ(t)

provided the limit exists. If f is continuous and bounded on [α, ∞) and if ψ is nondecreasing and bounded on [α, ∞), then ∫_α^∞ f(t) dψ(t) exists.

Let ψ : [0, ∞) → R be nondecreasing and bounded. We call ζ > 0 a point of increase of ψ if ψ is not constant on any interval [ζ − ε, ζ + ε], ε > 0.

Case 1. ψ has finitely many points of increase. Then ψ is a step function with finitely many jumps ζ_j, j = 1, 2, …, m (namely at the points of increase). There holds

    ∫_0^∞ dψ(t)/(z + t) = Σ_{j=1}^m (ψ(ζ_j+) − ψ(ζ_j−))/(z + ζ_j) =: r(z),

a rational function with simple poles on the negative real axis and positive residues.

16 Moreover,

(i) r is analytic in C \ (−∞, 0],
(ii) r(x) ≥ 0 for x ≥ 0,
(iii) r(U) ⊆ L and r(L) ⊆ U, where U and L denote the upper and lower half-plane, respectively.

Functions satisfying (i)–(iii) are called positive symmetric rational functions. Every positive symmetric rational function r of type (m − 1, m) or (m, m) can be written as

    r(z) = α + ∫_0^∞ dψ(t)/(z + t)

with α ≥ 0 and a nondecreasing function ψ : [0, ∞) → R which has finitely many points of increase.

Case 2. ψ has infinitely many points of increase. Then

    f(z) = ∫_0^∞ dψ(t)/(z + t)

exists for all z ∈ C \ (−∞, 0] and is an analytic function there. f is called the Stieltjes transform of ψ.

17 Examples:

    f(z) = log(1 + 1/z)           when ψ(t) = t for 0 ≤ t ≤ 1 and ψ(t) = 1 for t ≥ 1.
    f(z) = arctan(1/√z)/√z        when ψ(t) = √t for 0 ≤ t ≤ 1 and ψ(t) = 1 for t ≥ 1.
    f(z) = z^{−α}, α ∈ (0, 1),    when ψ(t) = (sin((1 − α)π)/((1 − α)π)) t^{1−α}.
    f(z) = z^{−α}(1 + z)^{−β},    0 < α, α + β < 1.

If ψ is the distribution function of the measure µ, i.e., ψ(x) = µ([0, x]) = ∫_0^x dµ(t), and if w(t) is the associated density function, then (under suitable conditions)

    ∫_0^∞ dµ(t)/(z + t) = ∫_0^∞ dψ(t)/(z + t) = ∫_0^∞ w(t)/(z + t) dt.
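As a sanity check of the first example, one can verify log(1 + 1/z) = ∫_0^1 dt/(z + t) numerically with scipy.integrate.quad (the sample point z = 2.3 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

z = 2.3
val, _ = quad(lambda t: 1.0 / (z + t), 0.0, 1.0)   # psi(t) = t on [0, 1]
print(val, np.log(1.0 + 1.0 / z))                  # both approximately 0.36101
```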

18 The main theorem

Theorem [Frommer]. Let A ∈ C^{n×n} be Hermitian positive definite and b ∈ C^n. Assume that the function f : (0, ∞) → R can be written as

    f(z) = ∫_0^∞ dµ(t)/(t + z)^k,   z > 0,

with a nonnegative measure µ and k ∈ N. For the Lanczos approximants f_m to f(A)b and the resulting errors d_m = f(A)b − f_m, there holds:

    {‖f_m‖}_{1≤m≤L} is monotonically increasing.
    {‖d_m‖}_{1≤m≤L} is monotonically decreasing.
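A sketch illustrating the theorem for the Stieltjes function f(z) = z^{−1/2}, which is of the form above with k = 1 and dµ(t) = t^{−1/2} dt/π; it reuses the lanczos helper sketched earlier, and matrix size, spectrum and seed are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(0.1, 10.0, n)) @ Q.T     # Hermitian positive definite
b = rng.standard_normal(n)

w, U = np.linalg.eigh(A)
exact = U @ ((U.T @ b) / np.sqrt(w))                 # f(A) b = A^{-1/2} b

norms, errs = [], []
for m in range(1, 26):
    V, T, beta1 = lanczos(A, b, m)                   # helper sketched earlier
    wT, UT = np.linalg.eigh(T)
    e1 = np.zeros(m); e1[0] = 1.0
    fm = beta1 * (V.real @ (UT @ ((UT.T @ e1) / np.sqrt(wT))))  # beta_1 V_m T_m^{-1/2} e_1
    norms.append(np.linalg.norm(fm))
    errs.append(np.linalg.norm(exact - fm))
print(np.all(np.diff(norms) >= -1e-12), np.all(np.diff(errs) <= 1e-12))
```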

19 Proof. Step 1. For the matrix S_m = diag(1, −1, …, (−1)^{m−1}) ∈ R^{m×m}, there holds:

    S_m^T = S_m and S_m^2 = I_m, i.e., S_m = S_m^T = S_m^{−1}.

The columns of V_m S_m = [v_1  −v_2  …  (−1)^{m+1} v_m] =: V_m^± form an ON basis of K_m(A, b).

    T_m^± := S_m T_m S_m =

        [ α_1   −β_2                      ]
        [ −β_2  α_2   −β_3                ]
        [       −β_3  α_3    ⋱            ]
        [              ⋱     ⋱     −β_m   ]
        [                    −β_m  α_m    ]

has nonpositive off-diagonal entries.

If A and therefore T_m as well as T_m^± are positive definite, then T_m^± and T_m^± + tI_m, t ≥ 0, are M-matrices ((M1) and (M3)).

20 Step 2. We can write the Lanczos approximants f_m in the form

    f_m = β_1 V_m f(T_m) e_1 = β_1 V_m S_m f(S_m T_m S_m) S_m e_1 = β_1 V_m^± f(T_m^±) e_1.

Consequently, ‖f_m‖ = ‖y_m‖, where y_m := β_1 f(T_m^±) e_1. For the special functions f which we consider here, there holds

    y_m = β_1 ∫_0^∞ (tI_m + T_m^±)^{−k} e_1 dµ(t).

Step 3. We define

    T̃_m^± := [ T_{m−1}^±   O   ]
              [ O^T         α_m ].

Then tI_m + T̃_m^± ≥ tI_m + T_m^± for all t ≥ 0. By (M3), tI_m + T̃_m^± is an M-matrix for all t ≥ 0, and

    O ≤ (tI_m + T̃_m^±)^{−1} ≤ (tI_m + T_m^±)^{−1}   for all t ≥ 0

by (M2).
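A small check of Steps 1-3 on random Lanczos-like data (test data only, not from the lecture): the sign-flipped matrix has nonpositive off-diagonals, and the decoupled matrix satisfies the entrywise resolvent inequality used next.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 7
alpha = rng.uniform(2.5, 4.0, m)                 # positive diagonal (A HPD)
beta = rng.uniform(0.2, 1.0, m - 1)              # positive off-diagonal
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

S = np.diag((-1.0) ** np.arange(m))              # S_m = diag(1, -1, 1, ...)
Tpm = S @ T @ S                                  # T_m^+-: off-diagonals flipped
print(np.all(Tpm - np.diag(np.diag(Tpm)) <= 1e-14))

Tpm_tilde = Tpm.copy()                           # decouple the last row/column
Tpm_tilde[m - 1, :m - 1] = 0.0
Tpm_tilde[:m - 1, m - 1] = 0.0
for t in [0.0, 0.5, 3.0]:
    R1 = np.linalg.inv(t * np.eye(m) + Tpm_tilde)
    R2 = np.linalg.inv(t * np.eye(m) + Tpm)
    print(np.all(R1 >= -1e-12) and np.all(R1 <= R2 + 1e-12))   # (M2) inequality
```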

21 Thus, for every t ≥ 0,

    O ≤ (tI_m + T̃_m^±)^{−k} ≤ (tI_m + T_m^±)^{−k}

and

    O ≤ ∫_0^∞ (tI_m + T̃_m^±)^{−k} dµ(t) ≤ ∫_0^∞ (tI_m + T_m^±)^{−k} dµ(t)

as well as

    O ≤ ∫_0^∞ (tI_m + T̃_m^±)^{−k} e_1 dµ(t) ≤ ∫_0^∞ (tI_m + T_m^±)^{−k} e_1 dµ(t).

Multiplied by β_1 > 0, this is just

    O ≤ [y_{m−1}; 0] ≤ y_m,

which is equivalent to ‖f_{m−1}‖ ≤ ‖f_m‖.

22 Step 4. The monotonicity of the errors

    d_m = f(A)b − β_1 V_m f(T_m) e_1 = V_L^± y_L − V_m^± y_m

follows from

    d_m = V_L^± (y_L − [y_m; 0])

together with O ≤ [y_{m−1}; 0] ≤ [y_m; 0] ≤ y_L.

Remark. For the Dirac measure µ = δ_0 there holds δ_0(M) = 1 if 0 ∈ M and δ_0(M) = 0 if 0 ∉ M, and

    ∫_0^∞ dµ(t)/(t + z)^k = z^{−k}.

For k = 1 this means that the errors of the CG method decrease monotonically with respect to ‖·‖_2 (for a different proof, see [Steihaug (1983)]).
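A brief check of the k = 1 remark (illustrative test data; reuses the lanczos sketch): the Lanczos/FOM iterate β_1 V_m T_m^{−1} e_1 coincides with the CG iterate for Ax = b with x_0 = 0, and its Euclidean error norm decreases monotonically.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 150
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                    # Hermitian positive definite
b = rng.standard_normal(n)
x_exact = np.linalg.solve(A, b)

errs = []
for m in range(1, 21):
    V, T, beta1 = lanczos(A, b, m)             # helper sketched earlier
    e1 = np.zeros(m); e1[0] = 1.0
    x_m = beta1 * (V.real @ np.linalg.solve(T, e1))   # CG/FOM iterate
    errs.append(np.linalg.norm(x_exact - x_m))
print(np.all(np.diff(errs) <= 1e-10))          # monotone decrease wrt ||.||_2
```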

23 An extension

We can apply the monotonicity results to functions of the form g(z) = f(z) p(z), where f is as above and p is a polynomial (of low degree). We write g(A)b = f(A) b̃ with b̃ = p(A)b and apply the Lanczos method in the Krylov spaces K_m(A, b̃).

E.g., sign(A)b = (A^2)^{−1/2} Ab, which suggests to approximate B^{−1/2} b̃ with B = A^2 (Hermitian positive definite if A is Hermitian and nonsingular) and b̃ = Ab, i.e., we work in K_m(A^2, Ab).
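A sketch of this suggestion for sign(A)b (the spectrum, size and seed below are arbitrary choices; it reuses the lanczos helper sketched earlier):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.concatenate([np.linspace(-5.0, -0.5, n // 2),
                    np.linspace(0.5, 5.0, n - n // 2)])
A = Q @ np.diag(d) @ Q.T                         # Hermitian, nonsingular, indefinite
b = rng.standard_normal(n)

B, b_tilde = A @ A, A @ b                        # B = A^2 (HPD), b~ = A b
m = 30
V, T, beta1 = lanczos(B, b_tilde, m)             # work in K_m(A^2, A b)
wT, UT = np.linalg.eigh(T)
e1 = np.zeros(m); e1[0] = 1.0
approx = beta1 * (V.real @ (UT @ ((UT.T @ e1) / np.sqrt(wT))))  # ~ B^{-1/2} b~ = sign(A) b

exact = Q @ (np.sign(d) * (Q.T @ b))             # sign(A) b from the eigendecomposition
print(np.linalg.norm(exact - approx))            # error decreases monotonically in m
```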

24 Hints to the literature

C. Berg. Quelques remarques sur le cône de Stieltjes. Lecture Notes in Mathematics 814, Springer, Berlin, Heidelberg.

A. Berman and R. J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York 1979. Updated edition, Classics in Applied Mathematics Vol. 9, SIAM, Philadelphia 1994.

V. Druskin. On monotonicity of the Lanczos approximation to the matrix exponential. Linear Algebra Appl. 429 (2008).

A. Frommer. Monotone convergence of the Lanczos approximations to matrix functions of Hermitian matrices. Electron. Trans. Numer. Anal. 35, 118–128 (2009).

T. Fujimoto and R. R. Ranade. Two characterizations of inverse-positive matrices: the Hawkins-Simon condition and the Le Chatelier-Braun principle. Electron. J. Linear Algebra 11 (2004).

P. Henrici. Applied and Computational Complex Analysis. Vol. 2: Special Functions, Integral Transforms, Asymptotics, Continued Fractions. John Wiley & Sons, New York 1977.

T. Steihaug. The conjugate gradient method and trust regions in large scale optimization. SIAM J. Numer. Anal. 20 (1983).
