Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms
Krylov Subspace Methods for the Evaluation of Matrix Functions: Applications and Algorithms. 2. First Results and Algorithms. Michael Eiermann, Institut für Numerische Mathematik und Optimierung, Technische Universität Bergakademie Freiberg, Germany. Wintersemester 2010/11. Michael Eiermann (TU Freiberg), Matrix Functions, WS 2010/11.
Outline

1. Further applications
2. Properties of matrix functions
3. Computational methods: scaling and squaring; the Schur-Parlett algorithm; an aside: functions of 2-by-2 triangular matrices; Newton's method; trapezoidal rule + conformal maps
Further applications

We saw that exp(A) is important for solving differential equations. But other functions are involved as well, e.g.,

y''(t) = −A y(t), y(0) = b, y'(0) = c,

is solved by

y(t) = cos(t √A) b + (√A)^{-1} sin(t √A) c.

But for now we will stay with exp(A): numerical simulation of transient electromagnetic (TEM) geophysical explorations.
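As a quick numerical check of this solution formula (a Python/NumPy sketch, not part of the original slides; the matrix A and the vectors b, c are made-up examples, with A symmetric positive definite so that √A is real), we can compare against the equivalent first-order system z' = [O I; −A O] z, z = (y, y'), solved with the matrix exponential:

```python
import numpy as np
from scipy.linalg import cosm, sinm, sqrtm, expm

# Made-up SPD example so that sqrtm(A) is real
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])
t = 0.7

R = sqrtm(A)                                         # sqrt(A)
y = cosm(t * R) @ b + np.linalg.solve(R, sinm(t * R) @ c)

# Reference: y'' = -A y as the first-order system z' = M z, z = (y, y')
n = A.shape[0]
M = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A,               np.zeros((n, n))]])
z = expm(t * M) @ np.concatenate([b, c])

err = np.linalg.norm(y - z[:n])
```

Both routes give the same y(t) up to rounding.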
Time evolution of the electrical field E(x, t) can be modelled via

σ ∂E/∂t + ∇×(µ^{-1} ∇×E) = 0 on Ω ⊂ R^3, E(x, t = 0) = E^{(0)}(x),

plus boundary conditions. Discretization in space leads to a homogeneous linear initial value problem with time-independent coefficients. The solution has the form

u(t) = exp(−tA) u_0,

where A discretizes σ^{-1} ∇×(µ^{-1} ∇× ·) and is large and sparse. [Figure: sparsity pattern of A (dimension about 2000) and the spatial coordinate system x, y, z.]
Properties of matrix functions

Let A ∈ C^{n×n} and let f, f_j, g be functions defined on Λ(A). Then:

- (f + g)(A) = f(A) + g(A) and (fg)(A) = f(A) g(A).
- f(A^T) = f(A)^T, but f(A^H) = f(A)^H is false in general.
- If B commutes with A, then B commutes with f(A).
- Let P(z_1, ..., z_l) be a polynomial in l variables. If f(z) := P(f_1(z), ..., f_l(z)) = 0 on Λ(A), i.e., f^{(ν)}(λ_µ) = 0 for µ = 1, ..., k and ν = 0, ..., n_µ − 1, then f(A) = P(f_1(A), ..., f_l(A)) = O. Examples: sin^2(A) + cos^2(A) = I, exp(−A) = exp(A)^{-1}.
- f(A ⊗ I) = f(A) ⊗ I and f(I ⊗ A) = I ⊗ f(A).
- If AB = BA, then exp(A + B) = exp(A) exp(B). In particular, exp(A ⊗ I + I ⊗ B) = exp(A) ⊗ exp(B).
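Two of these identities are easy to spot-check numerically (a sketch using SciPy's matrix functions on a made-up random test matrix):

```python
import numpy as np
from scipy.linalg import cosm, sinm, expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))    # arbitrary test matrix
I = np.eye(4)

# sin^2(A) + cos^2(A) = I: the scalar identity lifts to matrices
res_trig = np.linalg.norm(sinm(A) @ sinm(A) + cosm(A) @ cosm(A) - I)

# exp(-A) = exp(A)^{-1}
res_exp = np.linalg.norm(expm(-A) @ expm(A) - I)
```

Both residuals are at the level of rounding error.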
Computational methods

As the applications discussed so far show, very often it is not f(A) but f(A)b that needs to be computed. We are interested in problems where the action of a matrix function on a vector is required and where the matrix is large and sparse or structured. Here, evaluating f(A)b by first computing the (usually dense) matrix f(A) is infeasible. But since we are discussing projection methods, we also need to compute f(A)b for small or medium-sized matrices A. This is why we also discuss methods that compute f(A).

Many methods rely on the following observation: let {s_m} be a sequence of "simple" analytic functions (simple meaning that s_m(A) can be computed without major difficulties) which converges uniformly on a compact set Ω to f. If Λ(A) ⊂ Ω, then lim_{m→∞} s_m(A) = f(A).
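The simplest instance of this observation takes s_m to be the partial sums of the Taylor series of exp; each s_m(A) needs only matrix products. A sketch (the example matrix and the helper `taylor_expm` are mine, not from the slides):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # generator of a rotation

def taylor_expm(A, m):
    # s_m(A): degree-m partial sum of the exponential series
    n = A.shape[0]
    term = np.eye(n)
    S = np.eye(n)
    for k in range(1, m + 1):
        term = term @ A / k
        S = S + term
    return S

# Errors shrink as m grows, illustrating s_m(A) -> f(A)
errs = [np.linalg.norm(taylor_expm(A, m) - expm(A)) for m in (2, 5, 10)]
```

Plain Taylor summation is a poor general-purpose method (cancellation for large ‖A‖), which motivates the better approximants below.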
Scaling and squaring

The MATLAB function expm is based on this idea (and some clever tricks). Ingredients:

- exp(A) = (exp(A/t))^t. We use this for t = 2^s, s ∈ N.
- exp(z) can be well approximated by a Padé fraction if |z| is small.

The (k, m) Padé fraction of f (analytic at 0) is a rational function r_{k,m}(z) = p_{k,m}(z)/q_{k,m}(z) of type (k, m), i.e., p_{k,m} ∈ P_k and q_{k,m} ∈ P_m, with

f(z) − r_{k,m}(z) = O(z^{k+m+1}) as z → 0.

In other words, if we expand r_{k,m} in a Taylor series at z_0 = 0, then its coefficients coincide with the Taylor coefficients of f up to index k + m. The (k, m) Padé fraction is uniquely determined if it exists (provided we normalize the denominator by q_{k,m}(0) = 1).
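A scalar illustration of the matching order (my own example): the diagonal (1, 1) Padé fraction of exp is r(z) = (1 + z/2)/(1 − z/2), which matches exp through the z^2 coefficient, so its error behaves like O(z^3). Shrinking z by a factor of 10 should shrink the error by roughly 1000:

```python
import numpy as np

# (1,1) Pade fraction of exp: error ~ -z^3 / 12
z = 1e-3
err_small = abs(np.exp(z) - (1 + z / 2) / (1 - z / 2))
z2 = 1e-2
err_big = abs(np.exp(z2) - (1 + z2 / 2) / (1 - z2 / 2))
ratio = err_big / err_small       # expect roughly (z2/z)^3 = 1000
```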
For f(z) = exp(z) the Padé fractions of all types exist and are known explicitly. Here, only diagonal Padé fractions, i.e., k = m, will be applied. We denote them by r_m = p_m/q_m.

Method: choose a scaling parameter s. Approximate exp(A/2^s) by r_m(A/2^s) = p_m(A/2^s) [q_m(A/2^s)]^{-1} for some m. Then exp(A) is approximated by r_m(A/2^s)^{2^s} (repeated squaring). s and m are chosen such that exp(A) is approximated with backward error bounded by the unit roundoff and with minimal cost. More tricks are involved.

Costs: at most 5 + log_2(‖A‖_1/2) matrix multiplications, plus one solution of a matrix equation q_m(B) X = p_m(B) with B = A/2^s. [Higham (2005)]
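The skeleton of the method can be sketched in a few lines (a simplified Python version with a fixed Padé degree and a crude choice of s; the real algorithm of [Higham (2005)], as used by expm, picks s and m from backward-error estimates and evaluates p_m, q_m more cheaply):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_pade(A, m=6):
    # Scaling and squaring sketch with the diagonal (m, m) Pade fraction
    nrm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(nrm)))) if nrm > 1 else 0
    B = A / 2.0**s                               # now ||B||_1 <= 1
    n = A.shape[0]
    # Coefficients of p_m; q_m(z) = p_m(-z)
    c = [factorial(2 * m - j) * factorial(m)
         / (factorial(2 * m) * factorial(j) * factorial(m - j))
         for j in range(m + 1)]
    P = np.zeros((n, n)); Q = np.zeros((n, n)); Bj = np.eye(n)
    for j in range(m + 1):
        P += c[j] * Bj
        Q += ((-1) ** j * c[j]) * Bj
        Bj = Bj @ B
    R = np.linalg.solve(Q, P)                    # r_m(A / 2^s)
    for _ in range(s):                           # repeated squaring
        R = R @ R
    return R

A = np.array([[1.0, 2.0], [0.5, -1.0]])          # made-up test matrix
err = np.linalg.norm(expm_pade(A) - expm(A))
```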
The Schur-Parlett algorithm

Important ingredient: if

A = [A_{1,1} A_{1,2}; O A_{2,2}]

with square diagonal blocks, then

f(A) = [f(A_{1,1}) X; O f(A_{2,2})],

where X solves the Sylvester equation

A_{1,1} X − X A_{2,2} = f(A_{1,1}) A_{1,2} − A_{1,2} f(A_{2,2}).

Proof: compare blocks in A f(A) = f(A) A.
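This block formula is easy to verify numerically for f = exp (a sketch; the block sizes and entries are made-up). SciPy's `solve_sylvester(a, b, q)` solves a x + x b = q, so we pass b = −A_{2,2}:

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester

# Block upper triangular test matrix, eigenvalues of the two diagonal
# blocks well separated (1, 3 versus -1)
A11 = np.array([[1.0, 2.0], [0.0, 3.0]])
A12 = np.array([[1.0], [2.0]])
A22 = np.array([[-1.0]])
A = np.block([[A11, A12], [np.zeros((1, 2)), A22]])

F11, F22 = expm(A11), expm(A22)
# Sylvester equation A11 X - X A22 = F11 A12 - A12 F22
X = solve_sylvester(A11, -A22, F11 @ A12 - A12 @ F22)

F = np.block([[F11, X], [np.zeros((1, 2)), F22]])
err = np.linalg.norm(F - expm(A))
```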
More generally, if

A = [A_{1,1} A_{1,2} ... A_{1,b}; O A_{2,2} ... A_{2,b}; ... ; O O ... A_{b,b}]

is block triangular with square diagonal blocks, then

f(A) = [F_{1,1} F_{1,2} ... F_{1,b}; O F_{2,2} ... F_{2,b}; ... ; O O ... F_{b,b}]

(partitioning conformal to that of A), where F_{j,j} = f(A_{j,j}) for j = 1, 2, ..., b, and F_{i,j} solves

A_{i,i} F_{i,j} − F_{i,j} A_{j,j} = F_{i,i} A_{i,j} − A_{i,j} F_{j,j} + Σ_{k=i+1}^{j−1} (F_{i,k} A_{k,j} − A_{i,k} F_{k,j})

for i = j−1, j−2, ..., 1. [Parlett (1976)]
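With 1-by-1 blocks this recurrence is Parlett's original scalar recursion for a triangular matrix with distinct diagonal entries. A minimal sketch (no blocking or eigenvalue reordering, so it requires the diagonal entries to be well separated):

```python
import numpy as np
from scipy.linalg import expm

def parlett(T, f):
    # Parlett recurrence on an upper triangular T with distinct diagonal
    # entries, sweeping superdiagonal by superdiagonal
    n = T.shape[0]
    F = np.zeros_like(T)
    for j in range(n):
        F[j, j] = f(T[j, j])
    for d in range(1, n):
        for i in range(n - d):
            j = i + d
            s = F[i, i] * T[i, j] - T[i, j] * F[j, j]
            s += sum(F[i, k] * T[k, j] - T[i, k] * F[k, j]
                     for k in range(i + 1, j))
            F[i, j] = s / (T[i, i] - T[j, j])
    return F

T = np.array([[1.0, 2.0, 3.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 4.0]])       # made-up triangular test matrix
err = np.linalg.norm(parlett(T, np.exp) - expm(T))
```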
An aside: functions of 2-by-2 triangular matrices

Let

A = [λ_1 α; 0 λ_2]

and let f be defined on Λ(A). Then

f(A) = [f(λ_1) α (f(λ_2) − f(λ_1))/(λ_2 − λ_1); 0 f(λ_2)] if λ_1 ≠ λ_2,

and

f(A) = [f(λ) α f'(λ); 0 f(λ)] if λ_1 = λ_2 = λ.
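The two closed formulas translate directly into code (the helper name `f_upper_2x2` is mine); both branches can be checked against the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def f_upper_2x2(f, fprime, lam1, lam2, alpha):
    # f([[lam1, alpha], [0, lam2]]) via the closed formulas above
    if lam1 == lam2:
        off = alpha * fprime(lam1)                          # confluent case
    else:
        off = alpha * (f(lam2) - f(lam1)) / (lam2 - lam1)   # divided difference
    return np.array([[f(lam1), off], [0.0, f(lam2)]])

F = f_upper_2x2(np.exp, np.exp, 1.0, 2.0, 3.0)
err1 = np.linalg.norm(F - expm(np.array([[1.0, 3.0], [0.0, 2.0]])))

G = f_upper_2x2(np.exp, np.exp, 1.0, 1.0, 3.0)
err2 = np.linalg.norm(G - expm(np.array([[1.0, 3.0], [0.0, 1.0]])))
```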
Problem: the solution of A X − X B = C is ill-conditioned if Λ(A) and Λ(B) are not well separated.

The Schur-Parlett algorithm:
- Compute the Schur form of A: T = U^H A U.
- Reorder the Schur form (swapping) to T̃ = [T̃_{i,j}] = V^H T V such that
  - min{|λ − µ| : λ ∈ Λ(T̃_{i,i}), µ ∈ Λ(T̃_{j,j}), i ≠ j} > δ (usually δ = 0.1) (separation between blocks), and
  - for every block T̃_{i,i} of size bigger than 1: for each λ ∈ Λ(T̃_{i,i}) there is µ ∈ Λ(T̃_{i,i}), µ ≠ λ, such that |λ − µ| ≤ δ (separation within blocks).
- Approximate f(T̃_{i,i}) by a truncated Taylor series.
- Use the Parlett recursion to compute the off-diagonal blocks of f(T̃).
- Transform back. [Davies & Higham (2003)]
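SciPy's `scipy.linalg.funm` works in this spirit: it evaluates f on the Schur factor and fills in the off-diagonal part by a Parlett-type recurrence (without all the refinements of the Davies-Higham algorithm). A quick comparison against expm on a made-up matrix:

```python
import numpy as np
from scipy.linalg import funm, expm

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))       # generic matrix, eigenvalues simple

# Schur-Parlett-style general-purpose evaluation versus the specialized
# scaling-and-squaring code
err = np.linalg.norm(funm(A, np.exp) - expm(A))
```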
Newton's method

The sign function is defined by

sign(z) = +1 if Re(z) > 0, −1 if Re(z) < 0

(the sign function is not defined on the imaginary axis). For a matrix A ∈ C^{n×n} without purely imaginary eigenvalues, the matrix sign function sign(A) can be determined as follows. First compute the Jordan canonical form of A,

A = T [J_+ O; O J_−] T^{-1},

where J_+ [J_−] collects all Jordan blocks belonging to eigenvalues with positive [negative] real parts. Then

sign(A) = T [I O; O −I] T^{-1}.
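For a symmetric matrix the Jordan form is an eigendecomposition, so this definition reduces to applying sign to the eigenvalues (a sketch with a made-up symmetric matrix having eigenvalues of both signs):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, -2.0]])   # eigenvalues of mixed sign
w, V = np.linalg.eigh(A)
S = V @ np.diag(np.sign(w)) @ V.T         # sign applied to the spectrum
```

By construction S is an involution (S^2 = I) and commutes with A.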
The matrix sign function has important applications. Assume that we want to solve the Sylvester equation

A X + X B = C, A ∈ C^{m×m}, B ∈ C^{n×n}, C ∈ C^{m×n}.

Additionally, we suppose that sign(A) = I_m and sign(B) = I_n, which is fulfilled, e.g., if A is positive real (i.e., Re(λ) > 0 for all λ ∈ Λ(A)) and the Lyapunov equation A X + X A^H = C is to be solved. Now

[A C; O −B] = [I_m −X; O I_n] [A O; O −B] [I_m −X; O I_n]^{-1},

and thus

sign([A C; O −B]) = [I_m −X; O I_n] [I_m O; O −I_n] [I_m −X; O I_n]^{-1} = [I_m 2X; O −I_n].
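So the Sylvester solution can be read off the (1, 2) block of the sign of the block matrix. A sketch using SciPy's `signm` (the test matrices are made-up, chosen with spectra in the right half plane so that sign(A) = I and sign(B) = I):

```python
import numpy as np
from scipy.linalg import signm

rng = np.random.default_rng(2)
A = np.diag([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal((3, 3))
B = np.diag([1.0, 2.0]) + 0.1 * rng.standard_normal((2, 2))
C = rng.standard_normal((3, 2))

M = np.block([[A, C], [np.zeros((2, 3)), -B]])
S = signm(M)
X = 0.5 * S[:3, 3:]        # sign(M) = [[I, 2X], [O, -I]]

err = np.linalg.norm(A @ X + X @ B - C)
```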
A similar approach leads to the solution of the algebraic Riccati equation X F X − A^H X − X A = G [Roberts (1980)].

Since sign(A) solves X^2 − I = O, Newton's method

X_{m+1} = (X_m + X_m^{-1})/2, X_0 = A,

is one way to compute sign(A): if A has no purely imaginary eigenvalues, then {X_m} converges quadratically to sign(A),

‖X_{m+1} − sign(A)‖ ≤ (1/2) ‖X_m^{-1}‖ ‖X_m − sign(A)‖^2

(see [Higham (2008)]).
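The Newton iteration takes only a few lines (a bare-bones sketch without the scaling tricks used in production codes; the stopping rule and test matrix are mine):

```python
import numpy as np

def sign_newton(A, tol=1e-12, maxit=50):
    # X_{m+1} = (X_m + X_m^{-1}) / 2, X_0 = A
    X = A.copy()
    for _ in range(maxit):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X

A = np.array([[2.0, 1.0], [0.0, -3.0]])   # eigenvalues 2 and -3
S = sign_newton(A)
# By the 2x2 triangular formula, sign(A) = [[1, 0.4], [0, -1]]
```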
Trapezoidal rule + conformal maps

Under the usual assumptions on f, A and Γ,

f(A)b = (1/(2πi)) ∫_Γ f(ζ) (ζI − A)^{-1} b dζ =: (1/(2πi)) ∫_Γ g(ζ) dζ.

Here, Γ = {ζ : |ζ − α| = ρ}, i.e., ζ(θ) = α + ρ exp(iθ), 0 ≤ θ ≤ 2π. Since dζ = i (ζ(θ) − α) dθ,

f(A)b = (1/(2π)) ∫_0^{2π} (ζ(θ) − α) g(ζ(θ)) dθ

(the integrand is a periodic function of θ with period 2π). Apply the p-point trapezoidal rule:

f(A)b ≈ (1/p) Σ_{k=0}^{p−1} (ζ_k − α) g(ζ_k), with ζ_k − α = ρ exp(2πik/p).
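For an entire function like exp, a plain circle enclosing Λ(A) already converges geometrically, so the quadrature can be sketched directly (center, radius, node count, and the test matrix are made-up choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 0.3],
              [0.0, 0.0, 3.0]])          # eigenvalues 1, 2, 3
b = np.ones(3)
alpha, rho, p = 2.0, 2.5, 48             # circle center, radius, node count
I = np.eye(3)

acc = np.zeros(3, dtype=complex)
for k in range(p):
    zeta = alpha + rho * np.exp(2j * np.pi * k / p)
    # g(zeta) = f(zeta) (zeta I - A)^{-1} b, with f = exp
    g = np.exp(zeta) * np.linalg.solve(zeta * I - A, b)
    acc += (zeta - alpha) * g
approx = (acc / p).real

err = np.linalg.norm(approx - expm(A) @ b)
```

The conformal-map machinery below is what rescues this approach when f has a branch cut hugging the spectrum.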
Apply this to f analytic in C \ (−∞, 0] and A with Λ(A) ⊂ [m, M] ⊂ R. [Figure from [Hale, Higham & Trefethen (2008)]]

This does not work, i.e., convergence is slow, unless m ≫ 0. Idea: change of variables. This goes back to [Iri, Moriguti & Takasawa (1970)], [Takahasi & Mori (1973)], ...; the technique described here is due to [Hale, Higham & Trefethen (2008)].
Construct a conformal map ζ = ζ(s) from an annulus onto C \ ((−∞, 0] ∪ [m, M]) and integrate with respect to s. The trapezoidal rule in the s-plane, with nodes s_k equispaced on a circle inside the annulus, gives

f(A)b ≈ f_p(A, b) := (1/p) Σ_{k=0}^{p−1} s_k ζ'(s_k) g(ζ(s_k)).

[Figure from [Hale, Higham & Trefethen (2008)]]

Theorem [Hale, Higham & Trefethen (2008)]: f(A)b − f_p(A, b) = O(exp(−π^2 p / (log(M/m) + 3))).
[Figure from [Hale, Higham & Trefethen (2008)]]

The method can be enhanced for f with no singularities in (−∞, 0) and only an algebraic branch point at 0.
Pointers to the literature

P. I. Davies and N. J. Higham. A Schur-Parlett algorithm for computing matrix functions. SIAM J. Matrix Anal. Appl. 25 (2003).
P. I. Davies and N. J. Higham. Computing f(A)b for matrix functions f. In: QCD and Numerical Analysis III (A. Boriçi, A. Frommer, B. Joó, A. Kennedy, and B. Pendleton, eds.), Lecture Notes in Computational Science and Engineering vol. 47, Springer-Verlag, Berlin, 2005.
N. Hale, N. J. Higham, and L. N. Trefethen. Computing A^α, log(A), and related matrix functions by contour integrals. SIAM J. Numer. Anal. 46 (2008).
N. J. Higham. The scaling and squaring method for the matrix exponential revisited. SIAM J. Matrix Anal. Appl. 26 (2005).
M. Iri, S. Moriguti and Y. Takasawa. On a certain quadrature formula (in Japanese). Kokyuroku RIMS, Kyoto Univ. (1970). Translated into English in J. Comp. Appl. Math. 17 (1987).
B. N. Parlett. A recurrence among the elements of functions of triangular matrices. Linear Algebra Appl. 14 (1976).
J. D. Roberts. Linear model reduction and solution of the algebraic Riccati equation by use of the sign function. Int. J. Control 32 (1980).
H. Takahasi and M. Mori. Quadrature formulas obtained by variable transformation. Numer. Math. 12 (1973).
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationMORE CONSEQUENCES OF CAUCHY S THEOREM
MOE CONSEQUENCES OF CAUCHY S THEOEM Contents. The Mean Value Property and the Maximum-Modulus Principle 2. Morera s Theorem and some applications 3 3. The Schwarz eflection Principle 6 We have stated Cauchy
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationMath 3108: Linear Algebra
Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118
More informationIterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the
More informationOnline Exercises for Linear Algebra XM511
This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2
More informationPhys 201. Matrices and Determinants
Phys 201 Matrices and Determinants 1 1.1 Matrices 1.2 Operations of matrices 1.3 Types of matrices 1.4 Properties of matrices 1.5 Determinants 1.6 Inverse of a 3 3 matrix 2 1.1 Matrices A 2 3 7 =! " 1
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More informationRemark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called
More informationPart IB Numerical Analysis
Part IB Numerical Analysis Definitions Based on lectures by G. Moore Notes taken by Dexter Chua Lent 206 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after
More informationThe quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying
I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue
More information8.3 Partial Fraction Decomposition
8.3 partial fraction decomposition 575 8.3 Partial Fraction Decomposition Rational functions (polynomials divided by polynomials) and their integrals play important roles in mathematics and applications,
More information1 Holomorphic functions
Robert Oeckl CA NOTES 1 15/09/2009 1 1 Holomorphic functions 11 The complex derivative The basic objects of complex analysis are the holomorphic functions These are functions that posses a complex derivative
More informationStability and Inertia Theorems for Generalized Lyapunov Equations
Published in Linear Algebra and its Applications, 355(1-3, 2002, pp. 297-314. Stability and Inertia Theorems for Generalized Lyapunov Equations Tatjana Stykel Abstract We study generalized Lyapunov equations
More informationFactorized Solution of Sylvester Equations with Applications in Control
Factorized Solution of Sylvester Equations with Applications in Control Peter Benner Abstract Sylvester equations play a central role in many areas of applied mathematics and in particular in systems and
More information1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data.
Lect. 4. Toward MLA in tensor-product formats B. Khoromskij, Leipzig 2007(L4) 1 Contents of Lecture 4 1. Structured representation of high-order tensors revisited. - Tucker model. - Canonical (PARAFAC)
More information