Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants
Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants

Nick Higham
School of Mathematics, The University of Manchester

Approximation and Iterative Methods (AMI06), on the occasion of the retirement of Professor Claude Brezinski, Lille, June 22-23, 2006
What is Claude Photographing?
Overview

- Iterations X_{k+1} = g(X_k) for computing sign(A) (and A^{1/2}, the unitary polar factor, ...).
- Padé iterations for sign(A).
- Avoid the Jordan form in convergence proofs; reduce to: does G^k tend to zero?
- Reduce stability to: are the powers H^k bounded?
- Connections between the matrix sign function and the matrix square root.
- Convergence of a linear iteration for the matrix square root.
- Book in preparation: Functions of a Matrix: Theory and Computation.

MIMS Nick Higham Matrix functions 3 / 33
Matrix Sign Function

Let A ∈ C^{n×n} have no pure imaginary eigenvalues and let A = Z J Z^{-1} be a Jordan canonical form with

    J = [ J_1   0  ]    J_1 ∈ C^{p×p}, Λ(J_1) ⊂ LHP,
        [  0   J_2 ],   J_2 ∈ C^{q×q}, Λ(J_2) ⊂ RHP.

Then

    sign(A) = Z [ -I_p   0  ] Z^{-1}.
                [   0   I_q ]

Equivalently,

    sign(A) = A (A^2)^{-1/2},

    sign(A) = (2/π) A ∫_0^∞ (t^2 I + A^2)^{-1} dt.
Newton's Method for Sign: Convergence

    X_{k+1} = (1/2)(X_k + X_k^{-1}),  X_0 = A.

Let S := sign(A) and G := (A - S)(A + S)^{-1}. Then

    X_k = (I - G^{2^k})^{-1} (I + G^{2^k}) S.

The eigenvalues of G are (λ_i - sign(λ_i))/(λ_i + sign(λ_i)), so ρ(G) < 1 and G^{2^k} → 0. It is easy to show that

    ||X_{k+1} - S|| ≤ (1/2) ||X_k^{-1}|| ||X_k - S||^2.
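A minimal runnable sketch of this iteration (the 2x2 matrix below is an arbitrary example, not from the slides; any A with no pure imaginary eigenvalues would do):

```python
import numpy as np

# Newton iteration for the matrix sign function: X_{k+1} = (X_k + X_k^{-1})/2.
# Example matrix (arbitrary): real eigenvalues of both signs, none purely imaginary.
A = np.array([[3.0, 1.0],
              [1.0, -2.0]])

X = A.copy()
for _ in range(20):
    X = 0.5 * (X + np.linalg.inv(X))

S = X
assert np.allclose(S @ S, np.eye(2))   # S is an involution: S^2 = I
assert np.allclose(S @ A, A @ S)       # S commutes with A
```

Convergence is quadratic, so a handful of iterations reach machine precision; practical implementations typically add scaling to shorten the initial phase.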
Matrix Square Root

X is a square root of A ∈ C^{n×n} if X^2 = A. The number of square roots may be zero, finite or infinite. For A with no eigenvalues on R^- = {x ∈ R : x ≤ 0}, denote by A^{1/2} the principal square root: the unique square root with spectrum in the open right half-plane.
Newton's Method for Square Root

Newton's method: X_0 given; for k = 0, 1, 2, ...

    solve  X_k E_k + E_k X_k = A - X_k^2,
    set    X_{k+1} = X_k + E_k.

Assume A X_0 = X_0 A. Then one can show that

    X_{k+1} = (1/2)(X_k + X_k^{-1} A).   (*)

For nonsingular A, full Newton converges locally and quadratically to a primary square root.

- To which square root do the iterates converge?
- (*) can converge when full Newton breaks down.
- Note the lack of symmetry in (*).
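The simplified iteration (*) can be sketched the same way (the example matrix is hypothetical; starting at X_0 = A makes the commutativity assumption hold automatically):

```python
import numpy as np

# Simplified Newton iteration (*): X_{k+1} = (X_k + X_k^{-1} A)/2, X_0 = A.
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])   # eigenvalues 4 and 9: no eigenvalues on R^-

X = A.copy()
for _ in range(15):
    X = 0.5 * (X + np.linalg.inv(X) @ A)

assert np.allclose(X @ X, A)   # X is (numerically) the principal square root
```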
Convergence (Jordan)

Assume X_0 = p(A) for some polynomial p. Let Z^{-1} A Z = J be the Jordan canonical form and set Y_k = Z^{-1} X_k Z. Then

    Y_{k+1} = (1/2)(Y_k + Y_k^{-1} J),  Y_0 = J.

Convergence of the diagonal of Y_k reduces to the scalar case (Heron: y_{k+1} = (1/2)(y_k + λ/y_k), y_0 = λ), and one can show that the off-diagonal part converges too.

Problem: this analysis does not generalize to arbitrary X_0 with X_0 A = A X_0, since such an X_0 is not necessarily a polynomial in A.
Convergence (via Sign)

Theorem. Let A ∈ C^{n×n} have no eigenvalues on R^-. The Newton square-root iterates X_k with X_0 A = A X_0 are related to the Newton sign iterates

    S_{k+1} = (1/2)(S_k + S_k^{-1}),  S_0 = A^{-1/2} X_0,

by X_k = A^{1/2} S_k. Hence, provided A^{-1/2} X_0 has no pure imaginary eigenvalues, the X_k are defined and X_k → A^{1/2} sign(S_0) quadratically.

In particular, X_k → A^{1/2} if Λ(A^{-1/2} X_0) ⊂ RHP, e.g., if X_0 = A.
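The theorem's relation can be verified numerically; this sketch runs the two iterations side by side and assumes SciPy's `sqrtm` only to form a reference A^{1/2} (the example matrix is hypothetical):

```python
import numpy as np
from scipy.linalg import sqrtm

# Newton square-root iterates X_k (X_0 = A) and Newton sign iterates S_k
# (S_0 = A^{-1/2} X_0 = A^{1/2}), checking X_k = A^{1/2} S_k at every step.
A = np.array([[5.0, 2.0],
              [1.0, 4.0]])            # eigenvalues 6 and 3
Ah = np.real(sqrtm(A))                # reference principal square root

X = A.copy()
S = np.linalg.solve(Ah, A)            # S_0 = A^{-1/2} A = A^{1/2}
for _ in range(8):
    X = 0.5 * (X + np.linalg.inv(X) @ A)
    S = 0.5 * (S + np.linalg.inv(S))
    assert np.allclose(X, Ah @ S)     # X_k = A^{1/2} S_k
```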
More Sign Iterations

Start with Newton:  x_{k+1} = (1/2)(x_k + x_k^{-1}).

Invert:  x_{k+1} = 2x_k/(x_k^2 + 1).

Halley:  x_{k+1} = x_k(3 + x_k^2)/(1 + 3x_k^2).

Newton-Schulz:  x_{k+1} = (1/2) x_k (3 - x_k^2).
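A hedged sketch of the Newton-Schulz variant, which is attractive because it is inversion-free but is only locally convergent; the example matrix (hypothetical) is chosen so that ||I - A^2|| < 1:

```python
import numpy as np

# Inverse-free Newton-Schulz iteration for sign(A): X_{k+1} = X_k (3I - X_k^2)/2.
A = np.array([[0.9, 0.1],
              [0.0, -0.8]])
assert np.linalg.norm(np.eye(2) - A @ A, 2) < 1   # local convergence condition

X = A.copy()
for _ in range(30):
    X = 0.5 * X @ (3 * np.eye(2) - X @ X)

assert np.allclose(X @ X, np.eye(2))   # X = sign(A); here diagonal +1, -1
```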
The Padé Family: Derivation

For non-pure-imaginary z ∈ C, with ξ := 1 - z^2,

    sign(z) = z/(z^2)^{1/2} = z/(1 - (1 - z^2))^{1/2} = z/(1 - ξ)^{1/2}.

The [l/m] Padé approximant r_{lm}(ξ) = p_{lm}(ξ)/q_{lm}(ξ) to h(ξ) = (1 - ξ)^{-1/2} is known explicitly. Kenney & Laub (1991) set up the iterations

    x_{k+1} = f_{lm}(x_k) := x_k p_{lm}(1 - x_k^2)/q_{lm}(1 - x_k^2),  x_0 = a.
The Padé Family: Examples

The iteration functions f_{lm}(x) for l, m = 0, 1, 2:

    l \ m    m = 0                     m = 1                               m = 2
    l = 0    x                         2x/(1 + x^2)                        8x/(3 + 6x^2 - x^4)
    l = 1    x(3 - x^2)/2              x(3 + x^2)/(1 + 3x^2)               4x(1 + x^2)/(1 + 6x^2 + x^4)
    l = 2    x(15 - 10x^2 + 3x^4)/8    x(15 + 10x^2 - x^4)/(4(1 + 5x^2))   x(5 + 10x^2 + x^4)/(1 + 10x^2 + 5x^4)
The Padé Family: Convergence

Theorem (Kenney & Laub). Let A ∈ C^{n×n} have no pure imaginary eigenvalues, and let l + m > 0.

(a) For l ≥ m - 1: if ||I - A^2|| < 1 then X_k → sign(A) as k → ∞ and

    ||I - X_k^2|| ≤ ||I - A^2||^{(l+m+1)^k}.

(b) For l = m - 1 and l = m:

    (S - X_k)(S + X_k)^{-1} = [(S - A)(S + A)^{-1}]^{(l+m+1)^k},

and hence X_k → sign(A) as k → ∞.
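Part (b) is easy to sanity-check in the scalar case; this sketch runs the Halley iteration (l = m = 1, so the order l + m + 1 is 3) for an arbitrary scalar a and verifies the error recurrence:

```python
# Scalar check of part (b): with s = sign(a),
# (s - x_k)/(s + x_k) = ((s - a)/(s + a))^(3^k) for the Halley iteration.
a = 2.7            # arbitrary scalar with sign(a) = 1
s = 1.0
g0 = (s - a) / (s + a)

x = a
for k in range(1, 5):
    x = x * (3 + x**2) / (1 + 3 * x**2)     # f_11 (Halley)
    lhs = (s - x) / (s + x)
    rhs = g0 ** (3 ** k)
    assert abs(lhs - rhs) <= 1e-12 * (1 + abs(rhs))
```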
Principal Padé Iterations

The principal Padé iterations are those with l = m - 1 or l = m, i.e. the main diagonal and first superdiagonal of the table of f_{lm}:

    x,  2x/(1 + x^2),  x(3 + x^2)/(1 + 3x^2),  4x(1 + x^2)/(1 + 6x^2 + x^4),  x(5 + 10x^2 + x^4)/(1 + 10x^2 + 5x^4),  ...
Principal Padé Iteration Properties

Define g_r(x) by taking entries from the main diagonal and superdiagonal of the Padé table in zig-zag fashion.

Theorem (Kenney & Laub).

(a) g_r(x) = [(1 + x)^r - (1 - x)^r] / [(1 + x)^r + (1 - x)^r].

(b) g_r(x) = tanh(r arctanh(x)).

(c) g_r(g_s(x)) = g_{rs}(x) (semigroup property).

(d) For even r,

    g_r(x) = (2/r) Σ_{i=0}^{r/2-1}  x / [ sin^2((2i+1)π/(2r)) + cos^2((2i+1)π/(2r)) x^2 ].
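The identities (a)-(c) can be spot-checked numerically on scalars (arbitrary test point x and orders r):

```python
import numpy as np

# g_r via the explicit formula (a), checked against the tanh identity (b)
# and the semigroup property (c).
def g(r, x):
    p, m = (1 + x) ** r, (1 - x) ** r
    return (p - m) / (p + m)

x = 0.37
for r in (2, 3, 5, 8):
    assert abs(g(r, x) - np.tanh(r * np.arctanh(x))) < 1e-14
assert abs(g(2, g(4, x)) - g(8, x)) < 1e-14   # g_2(g_4(x)) = g_8(x)
```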
[Figure: plots of g_r(x) = tanh(r arctanh(x)) for r = 2, 4, 8, ...]
Class of Square Root Iterations

Theorem (H, Mackey, Mackey & Tisseur, 2005). Suppose the iteration X_{k+1} = X_k h(X_k^2), X_0 = A, converges to sign(A) with order m. If Λ(A) ∩ R^- = ∅ and

    Y_{k+1} = Y_k h(Z_k Y_k),  Y_0 = A,
    Z_{k+1} = h(Z_k Y_k) Z_k,  Z_0 = I,

then Y_k → A^{1/2} and Z_k → A^{-1/2} as k → ∞, with order m.

The proof makes use of

    sign( [ 0  A ] )  =  [    0      A^{1/2} ]
          [ I  0 ]       [ A^{-1/2}     0    ].
Examples

Newton sign:  X_{k+1} = X_k · (1/2)(I + (X_k^2)^{-1}) ≡ X_k h(X_k^2),  X_0 = A.
Coupled form:  Y_{k+1} = (1/2) Y_k (I + (Z_k Y_k)^{-1}) = (1/2)(Y_k + Z_k^{-1}),  Y_0 = A.

Denman-Beavers square root iteration:

    Y_{k+1} = (1/2)(Y_k + Z_k^{-1}),  Y_0 = A,
    Z_{k+1} = (1/2)(Z_k + Y_k^{-1}),  Z_0 = I.

Newton-Schulz square root iteration:

    Y_{k+1} = (1/2) Y_k (3I - Z_k Y_k),  Y_0 = A,
    Z_{k+1} = (1/2) (3I - Z_k Y_k) Z_k,  Z_0 = I.
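A minimal sketch of the Denman-Beavers iteration (the example matrix is hypothetical; note that both updates must use the old Y_k and Z_k, which the tuple assignment guarantees):

```python
import numpy as np

# Denman-Beavers iteration: Y_k -> A^{1/2} and Z_k -> A^{-1/2}.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])    # symmetric positive definite example

Y, Z = A.copy(), np.eye(2)
for _ in range(12):
    # Simultaneous update: the right-hand side uses the old Y and Z.
    Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))

assert np.allclose(Y @ Y, A)           # Y = A^{1/2}
assert np.allclose(Y @ Z, np.eye(2))   # Z = A^{-1/2}
```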
History of Newton Sqrt Instability

    X_{k+1} = (1/2)(X_k + X_k^{-1} A).

Instability of Newton was noted by Laasonen (1958): "Newton's method, if carried out indefinitely, is not stable whenever the ratio of the largest to the smallest eigenvalue of A exceeds the value 9."

Described informally by Blackwell (1985) in Mathematical People: Profiles and Interviews. Analyzed by Higham (1986) for diagonalizable A by deriving error amplification factors.
Stability

Definition. The iteration X_{k+1} = g(X_k) is stable in a neighbourhood of a fixed point X if the Fréchet derivative dg_X has bounded powers.

For scalars and superlinearly convergent g, dg_X = g'(x) = 0.

Let X_0 = X + E_0 and E_k := X_k - X. Then

    X_{k+1} = g(X_k) = g(X + E_k) = g(X) + dg_X(E_k) + o(||E_k||),

so, since g(X) = X,

    E_{k+1} = dg_X(E_k) + o(||E_k||).

If ||dg_X^i(E)|| ≤ c ||E|| for all i, then recurring this relation leads to

    ||E_k|| ≤ c ||E_0|| + k c · o(||E_0||).
Stability of Newton Square Root

    g(X) = (1/2)(X + X^{-1} A),
    dg_X(E) = (1/2)(E - X^{-1} E X^{-1} A).

The relevant fixed point is X = A^{1/2}, giving

    dg_{A^{1/2}}(E) = (1/2)(E - A^{-1/2} E A^{1/2}).

The eigenvalues of dg_{A^{1/2}} are

    (1/2)(1 - λ_i^{1/2} λ_j^{-1/2}),  i, j = 1:n.

For stability we need

    max_{i,j} (1/2) |1 - λ_i^{1/2} λ_j^{-1/2}| < 1.

For Hermitian positive definite A, this requires κ_2(A) < 9.
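The κ_2(A) < 9 threshold can be explored numerically; `max_amp` below is an illustrative helper (not from the slides) that evaluates the largest amplification factor over a given positive spectrum:

```python
import numpy as np

# Largest modulus of the eigenvalues (1 - lambda_i^{1/2} lambda_j^{-1/2})/2
# of the Frechet derivative at A^{1/2}, for a matrix with the given
# positive eigenvalues.
def max_amp(eigs):
    r = np.sqrt(np.asarray(eigs))
    return max(abs(1 - a / b) / 2 for a in r for b in r)

assert max_amp([1.0, 8.0]) < 1    # kappa_2 = 8 < 9: stable
assert max_amp([1.0, 10.0]) >= 1  # kappa_2 = 10 > 9: unstable
```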
Advantages

- Uses only the Fréchet derivative of g.
- No additional assumptions on A.
- The perturbation analysis is all in the definition.
- General, unifying approach.
- Facilitates analysis of families of iterations.
Stability of Sign Iterations (H)

Theorem. Let X_{k+1} = g(X_k) be any superlinearly convergent iteration for S = sign(X_0). Then

    dg_S(E) = L_S(E) = (1/2)(E - S E S),

where L_S is the Fréchet derivative of the matrix sign function at S. Hence dg_S is idempotent (dg_S ∘ dg_S = dg_S) and the iteration is stable.

All sign iterations are automatically stable.
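The idempotency claim is easy to spot-check for a concrete S with S^2 = I (both S and the perturbation E below are arbitrary examples):

```python
import numpy as np

# Idempotency of L_S(E) = (E - S E S)/2 for any S with S^2 = I.
S = np.diag([1.0, 1.0, -1.0])        # a simple matrix sign: S^2 = I
E = np.arange(9.0).reshape(3, 3)     # arbitrary perturbation

def L(F):
    return 0.5 * (F - S @ F @ S)

assert np.allclose(L(L(E)), L(E))    # L_S composed with itself is L_S
```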
Theorem (H, Mackey, Mackey & Tisseur, 2005). Consider the iteration function

    G(Y, Z) = [ Y h(ZY) ]
              [ h(ZY) Z ],

where X_{k+1} = X_k h(X_k^2) is any superlinearly convergent iteration for sign(X_0). Any pair P = (B, B^{-1}) is a fixed point of G, and the Fréchet derivative of G at P is

    dG_P(E, F) = (1/2) [ E - B F B           ]
                       [ F - B^{-1} E B^{-1} ].

dG_P is idempotent, and hence the iteration is stable. In particular, the Denman-Beavers iteration is stable.
Binomial Iteration for (I - C)^{1/2}

Suppose ρ(C) < 1. Define (I - C)^{1/2} =: I - P and square to obtain P^2 - 2P = -C. This suggests the iteration

    P_{k+1} = (1/2)(C + P_k^2),  P_0 = 0.

I - P_k reproduces the binomial series

    (I - C)^{1/2} = Σ_{j=0}^∞ binom(1/2, j) (-C)^j = I - Σ_{j=1}^∞ α_j C^j  (α_j > 0)

up to terms of order C^k.
Binomial Iteration Convergence (1)

Theorem. Let P_{k+1} = (1/2)(C + P_k^2), P_0 = 0. Let C ∈ R^{n×n} satisfy C ≥ 0 (elementwise) and ρ(C) < 1, and write (I - C)^{1/2} = I - P. Then P_k → P with

    0 ≤ P_k ≤ P_{k+1} ≤ P,  k ≥ 0.

Let X_k = P_k/2. Then X_{k+1} = X_k^2 + C/4.
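A sketch of the binomial iteration in the nonnegative case of the theorem (the matrix C is a hypothetical example; note the convergence is only linear, so many more iterations are needed than for the Newton-type methods):

```python
import numpy as np

# Binomial iteration P_{k+1} = (C + P_k^2)/2, P_0 = 0, for (I - C)^{1/2} = I - P.
C = np.array([[0.3, 0.2],
              [0.1, 0.4]])
assert max(abs(np.linalg.eigvals(C))) < 1   # rho(C) < 1, C >= 0 elementwise

P = np.zeros_like(C)
for _ in range(100):
    P = 0.5 * (C + P @ P)

X = np.eye(2) - P
assert np.allclose(X @ X, np.eye(2) - C)   # (I - P)^2 = I - C
```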
Binomial Iteration Convergence (2)

Theorem (H). Let the eigenvalues λ of C ∈ C^{n×n} lie in the cardioid

    { 2z - z^2 : z ∈ C, |z| < 1 }.

Then (I - C)^{1/2} =: I - P exists and P_k → P linearly.
Paradox

    (I - C)^{1/2} = Σ_{j=0}^∞ binom(1/2, j) (-C)^j

has radius of convergence 1. For P_{k+1} = (1/2)(C + P_k^2), P_0 = 0: P_k reproduces the first k + 1 terms of the power series, yet P_k converges on the cardioid, a region larger than the unit disk.
Conclusions

- Stability is equivalent to power-boundedness of a Fréchet derivative.
- Better understanding of convergence analysis (prefer matrix powers to the Jordan form).
- The matrix sign function is fundamental, and its connections with the square root can be exploited.
- Padé iterations for sign have many interesting properties.
- Analysis of linearly convergent iterations can be tricky.
What is Claude Photographing?
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationEXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. 1. Determinants
EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. Determinants Ex... Let A = 0 4 4 2 0 and B = 0 3 0. (a) Compute 0 0 0 0 A. (b) Compute det(2a 2 B), det(4a + B), det(2(a 3 B 2 )). 0 t Ex..2. For
More informationLinear Algebra Practice Problems
Linear Algebra Practice Problems Math 24 Calculus III Summer 25, Session II. Determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless otherwise stated,
More informationKrylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms
Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms 2. First Results and Algorithms Michael Eiermann Institut für Numerische Mathematik und Optimierung Technische
More informationEigenpairs and Similarity Transformations
CHAPTER 5 Eigenpairs and Similarity Transformations Exercise 56: Characteristic polynomial of transpose we have that A T ( )=det(a T I)=det((A I) T )=det(a I) = A ( ) A ( ) = det(a I) =det(a T I) =det(a
More informationHendrik De Bie. Hong Kong, March 2011
A Ghent University (joint work with B. Ørsted, P. Somberg and V. Soucek) Hong Kong, March 2011 A Classical FT New realizations of sl 2 in harmonic analysis A Outline Classical FT New realizations of sl
More informationMath 291-2: Final Exam Solutions Northwestern University, Winter 2016
Math 29-2: Final Exam Solutions Northwestern University, Winter 206 Determine whether each of the following statements is true or false f it is true, explain why; if it is false, give a counterexample
More informationOnline Exercises for Linear Algebra XM511
This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2
More informationResearch Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics
Research Matters February 25, 2009 The Nonlinear Eigenvalue Problem Nick Higham Part III Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester
More informationCayley-Hamilton Theorem
Cayley-Hamilton Theorem Massoud Malek In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n Let A be an n n matrix Although det (λ I n A
More informationDefinition (T -invariant subspace) Example. Example
Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin
More informationTwo Results About The Matrix Exponential
Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about
More informationFinal Exam Practice Problems Answers Math 24 Winter 2012
Final Exam Practice Problems Answers Math 4 Winter 0 () The Jordan product of two n n matrices is defined as A B = (AB + BA), where the products inside the parentheses are standard matrix product. Is the
More information1. In this problem, if the statement is always true, circle T; otherwise, circle F.
Math 1553, Extra Practice for Midterm 3 (sections 45-65) Solutions 1 In this problem, if the statement is always true, circle T; otherwise, circle F a) T F If A is a square matrix and the homogeneous equation
More informationWeek Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2,
Math 051 W008 Margo Kondratieva Week 10-11 Quadratic forms Principal axes theorem Text reference: this material corresponds to parts of sections 55, 8, 83 89 Section 41 Motivation and introduction Consider
More informationMultiparameter eigenvalue problem as a structured eigenproblem
Multiparameter eigenvalue problem as a structured eigenproblem Bor Plestenjak Department of Mathematics University of Ljubljana This is joint work with M Hochstenbach Będlewo, 2932007 1/28 Overview Introduction
More informationJ-Orthogonal Matrices: Properties and Generation
SIAM REVIEW Vol. 45,No. 3,pp. 504 519 c 2003 Society for Industrial and Applied Mathematics J-Orthogonal Matrices: Properties and Generation Nicholas J. Higham Abstract. A real, square matrix Q is J-orthogonal
More informationEigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis
Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply
More informationGENERAL ARTICLE Realm of Matrices
Realm of Matrices Exponential and Logarithm Functions Debapriya Biswas Debapriya Biswas is an Assistant Professor at the Department of Mathematics, IIT- Kharagpur, West Bengal, India. Her areas of interest
More informationUniversity of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012
University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012 Name: Exam Rules: This is a closed book exam. Once the exam
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More information18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in
806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationSynopsis of Numerical Linear Algebra
Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More information