A Modified Method for Reconstructing Periodic Jacobi Matrices


Mathematics of Computation, Volume 42, Number 165, January 1984, Pages 143-150

A Modified Method for Reconstructing Periodic Jacobi Matrices

By Daniel Boley and Gene H. Golub*

Abstract. In this note, we discuss the reconstruction of periodic Jacobi matrices from spectral data. The method combines ideas and techniques from the algorithms given by Boley and Golub [1], [2], and Ferguson [3], resulting in a numerically stable algorithm applicable to a larger class of problems. The number of initial data items needed for this method equals the number of items in the resulting matrix, namely 2n.

1. Introduction. In this note, we discuss the reconstruction of a periodic Jacobi matrix of the form

(1a)    J = [ a_1   b_1                     b_n     ]
            [ b_1   a_2   b_2                       ]
            [       b_2    .     .                  ]
            [              .     .      b_{n-1}     ]
            [ b_n               b_{n-1}  a_n        ]

from the eigenvalue data using methods that are numerically stable. Here the a_i and b_i are real, and b_i ≠ 0 for all i. Such problems arise, for example, in the study of Toda lattices [9], [10], and might also arise in the study of Sturm-Liouville and Hill's equations [3]. In particular, we are given the following initial information: the eigenvalues λ_i, i = 1, ..., n, of J, and the eigenvalues μ_i, i = 1, ..., n - 1, of the submatrix Ĵ obtained from J by deleting the first row and column. We also need the product β = b_1 b_2 ⋯ b_n, or equivalently the eigenvalues λ_i^-, i = 1, ..., n, of the matrix

(1b)    J^- = [ a_1   b_1                    -b_n     ]
              [ b_1   a_2   b_2                       ]
              [       b_2    .     .                  ]
              [              .     .      b_{n-1}     ]
              [-b_n               b_{n-1}  a_n        ]

that is, J with the sign of the corner entries b_n reversed.

Received October 5, 1982.
1980 Mathematics Subject Classification. Primary 15A18, 65F15.
* We gratefully acknowledge the support of the National Science Foundation under grant ECS-824468 for the first author and the Department of Energy under grant DE-AT3-76ER 713 for the second author.
©1984 American Mathematical Society 0025-5718/84 $1.00 + $.25 per page
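To fix the setting computationally, here is a minimal NumPy sketch (not part of the original paper) that forms J and J^- of (1a)-(1b) from given vectors a and b and produces the spectral data λ_i, μ_i and β described above; the function names and test values are illustrative only.

import numpy as np

def periodic_jacobi(a, b):
    """Form the periodic Jacobi matrix J of (1a) and its companion J^- of (1b).

    a : length-n diagonal entries a_1, ..., a_n
    b : length-n off-diagonal entries b_1, ..., b_n (b_n sits in the corners)
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    J = np.diag(a) + np.diag(b[:-1], 1) + np.diag(b[:-1], -1)
    J[0, -1] = J[-1, 0] = b[-1]
    Jm = J.copy()
    Jm[0, -1] = Jm[-1, 0] = -b[-1]          # J^- : corner entries with reversed sign
    return J, Jm

def spectral_data(J):
    """Return the initial data of Section 1: the eigenvalues lambda_i of J and
    the eigenvalues mu_i of the submatrix J-hat (first row and column deleted)."""
    lam = np.linalg.eigvalsh(J)
    mu = np.linalg.eigvalsh(J[1:, 1:])
    return lam, mu

if __name__ == "__main__":
    # Illustrative test values (not necessarily those used in the paper).
    a = [2.0, 2.0, 2.0, 2.0]
    b = [1.0, 1.0, 1.0, 1.0]
    J, Jm = periodic_jacobi(a, b)
    lam, mu = spectral_data(J)
    beta = float(np.prod(b))
    print(lam, mu, beta)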

The techniques used are based on the methods described in [1] and [2]. In [3], it was pointed out that the method in [2] would fail on a certain class of matrices, and a method was given extending the collection of solvable problems to cover this class. The class in question can be described in terms of the interlacing property

(2)    λ_i ≤ μ_i ≤ λ_{i+1},    i = 1, ..., n - 1.

If the inequalities are strict, then we say J satisfies the strict interlacing property. This last property is necessary for the algorithm in [2] but not for the one in [3]. In addition, unlike [2], the method in [3] does not require that the order of the system, n, be even. However, the method in [3] starts with substantially different initial data and hence does not directly solve the problem posed in [2].

In this method, we need to assume only that the μ_i be simple. This is equivalent to assuming that the off-diagonal elements of Ĵ be nonzero [8, p. 3]. In order to ensure that a solution exists in the real numbers, we also need to assume that the λ_i satisfy (2), and that β lies in an interval (0, β_max), which will be defined below.

2. Preliminaries. We need to define the following two bordered diagonal matrices A, A^- [4], which appear as intermediate results in the course of the computation, and which are similar to J and J^-, respectively, by the same transformation:

(3a)    A = diag(1, P)^T J diag(1, P),    A^- = diag(1, P)^T J^- diag(1, P),

where diag(1, P) denotes the n × n matrix with the scalar 1 in the (1,1) position and P in the trailing (n - 1) × (n - 1) block, and P is the orthogonal matrix of eigenvectors of the submatrix Ĵ. Hence A and J have the same characteristic polynomial, and so do A^- and J^-. The matrices A, A^- can be written in the form

(3b)    A = [ a_11   c^T ]          A^- = [ a_11   (c^-)^T ]
            [ c      M   ],               [ c^-    M       ],

where M = diag(μ_1, ..., μ_{n-1}) = P^T Ĵ P, and the eigenvalues of A and A^- are the λ_i and λ_i^-, respectively. We shall later see how to define A^- using only β instead of the λ_i^-. In the following, we will denote the i-th row of P by p_i^T, that is, P^T = [p_1, ..., p_{n-1}].

It is obvious that

(4a)    a_11 = trace(A) - trace(M) = Σ_{i=1}^{n} λ_i - Σ_{i=1}^{n-1} μ_i.

By expanding the determinant, the characteristic polynomial of A may be written as

(4b)    det(λI - A) = (λ - a_11) ∏_{j=1}^{n-1} (λ - μ_j) - Σ_{k=1}^{n-1} c_k^2 ∏_{j≠k} (λ - μ_j).

Setting λ = μ_1, ..., μ_{n-1} and solving the n - 1 equations for the c_i yields

(4c)    c_i^2 = - ∏_{j=1}^{n} (λ_j - μ_i) / ∏_{j=1, j≠i}^{n-1} (μ_j - μ_i) ≥ 0,

thus completely defining A up to the choice of signs of the c_i.
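As an illustration (not from the original paper), the bordered matrix A of (3b) can be assembled directly from the spectral data; the sketch below uses (4a) for a_11 and (4c) for the c_i, taking positive square roots, so it returns only one of the possible sign choices for c. Names are illustrative, and strict interlacing with simple μ_i is assumed.

import numpy as np

def bordered_matrix(lam, mu):
    """Build A of (3b) from the eigenvalues lam (of J) and mu (of J-hat)."""
    lam = np.asarray(lam, dtype=float)
    mu = np.asarray(mu, dtype=float)
    n = len(lam)
    a11 = lam.sum() - mu.sum()                   # (4a)
    c2 = np.empty(n - 1)
    for i in range(n - 1):
        num = np.prod(lam - mu[i])               # prod_j (lambda_j - mu_i)
        den = np.prod(np.delete(mu, i) - mu[i])  # prod_{j != i} (mu_j - mu_i)
        c2[i] = -num / den                       # (4c); nonnegative under interlacing
    c = np.sqrt(np.maximum(c2, 0.0))             # one particular choice of signs
    A = np.zeros((n, n))
    A[0, 0] = a11
    A[0, 1:] = A[1:, 0] = c
    A[1:, 1:] = np.diag(mu)
    return A, c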

If we let S denote the n × n symmetric tridiagonal matrix with diagonal a_1, ..., a_n and off-diagonal entries b_1, ..., b_{n-1} (that is, J with the corner entries set to zero), and define p(λ), r(λ) as the characteristic polynomials of S and of the tridiagonal matrix obtained from S by deleting its first and last rows and columns, respectively, then we may write

    det(J - λI) = p(λ) - b_n^2 r(λ) - 2(-1)^n β,
    det(J^- - λI) = p(λ) - b_n^2 r(λ) + 2(-1)^n β.

Subtracting yields det(J^- - λI) = det(A^- - λI) = det(A - λI) + 4(-1)^n β. So we can solve for the c_i^- as in (4c) to obtain

(4d)    (c_i^-)^2 = - [ ∏_{j=1}^{n} (λ_j - μ_i) + 4(-1)^n β ] / ∏_{j=1, j≠i}^{n-1} (μ_j - μ_i).

Hence we can define A^-, either with β using (4d), or else by using (4c) with the λ_i^- substituted for the λ_i. Note that (4d) defines the interval (0, β_max) in which β must lie by the requirement that (c_i^-)^2 ≥ 0.

If we equate the first columns from Eq. (3a), we get

(5a)    [b_1, 0, ..., 0, b_n]^T = Pc,    [b_1, 0, ..., 0, -b_n]^T = Pc^-,

in which the left-hand sides are (n - 1)-vectors. Subtracting yields [0, ..., 0, 2b_n]^T = 2b_n e_{n-1} = P(c - c^-), which, applying P^T to both sides, yields

(5b)    2b_n p_{n-1} = c - c^-.

Similarly, if we add the two parts of (5a) together, we get

(5c)    2b_1 p_1 = c + c^-.

In (5b), (5c) the left-hand sides are determined up to sign by the requirement that ||p_1|| = ||p_{n-1}|| = 1. Note that we have a choice of signs in taking the square roots of the quantities defined in (4c), (4d) to obtain the c_i, c_i^-. A different choice of signs will give rise to different vectors p_1, p_{n-1} and hence a different J. Thus, except in certain degenerate cases, there will be more than one matrix satisfying the initial eigenvalue conditions.
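Continuing the illustrative sketch begun after (4c) (again, not from the paper), the companion vector c^- of (4d) and the quantities b_1, p_1 and b_n, p_{n-1} of (5b)-(5c) can be computed as follows for one particular choice of signs; β is assumed to lie in the admissible interval (0, β_max).

import numpy as np

def border_data(lam, mu, beta, c):
    """Given lam, mu, beta and one sign choice of c (from (4c)), return c_minus
    from (4d) and b_1, p_1, b_n, p_{n-1} from (5b), (5c)."""
    lam = np.asarray(lam, dtype=float)
    mu = np.asarray(mu, dtype=float)
    n = len(lam)
    cm2 = np.empty(n - 1)
    for i in range(n - 1):
        num = np.prod(lam - mu[i]) + 4.0 * (-1.0) ** n * beta
        den = np.prod(np.delete(mu, i) - mu[i])
        cm2[i] = -num / den                  # (4d); >= 0 for beta in (0, beta_max)
    c_minus = np.sqrt(np.maximum(cm2, 0.0))  # one particular choice of signs
    s = c + c_minus                          # (5c): 2 b_1 p_1     = c + c^-
    d = c - c_minus                          # (5b): 2 b_n p_{n-1} = c - c^-
    b1 = 0.5 * np.linalg.norm(s)             # ||p_1|| = 1 fixes |b_1|
    p1 = s / (2.0 * b1)
    bn = 0.5 * np.linalg.norm(d)             # ||p_{n-1}|| = 1 fixes |b_n|
    pn1 = d / (2.0 * bn) if bn != 0.0 else None
    return c_minus, b1, p1, bn, pn1

Different sign patterns for the entries of c and c^- lead to the different admissible matrices J mentioned above.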

3. The Method. What have we now? We have p_1, that is, the first row of the eigenvector matrix P of the tridiagonal matrix Ĵ. We also have the eigenvalues μ_i of Ĵ, so Ĵ is completely determined (an elementary consequence of Theorem 4.2 of Chapter 7 of [6]). We can compute Ĵ an element at a time, and P a row at a time, by alternating between the formulas

(6a)    Ĵ = P M P^T,
(6b)    P^T Ĵ = M P^T.

The method that results from this is the Lanczos Algorithm [7], [8], and proceeds as follows:

Lanczos Algorithm. Start with p_1, an (n - 1)-vector with 2-norm 1, and M = diag(μ_1, ..., μ_{n-1}), a diagonal matrix. Object: compute orthonormal vectors p_2, ..., p_{n-1} and an (n - 1) × (n - 1) tridiagonal matrix Ĵ, with diagonal entries â_1, ..., â_{n-1} and off-diagonal entries b̂_1, ..., b̂_{n-2}, satisfying (6).

 1.  begin
 2.      â_1 ← p_1^T M p_1;
 3.      for i ← 2, ..., n - 1 do
 4.      begin
 5.          z_i ← (if i = 2 then M p_1 - p_1 â_1 else M p_{i-1} - p_{i-1} â_{i-1} - p_{i-2} b̂_{i-2});
 6.          b̂_{i-1} ← ||z_i||_2;
 7.          p_i ← z_i / b̂_{i-1};
 8.          â_i ← p_i^T M p_i;
 9.      end
10.  end.

Step 5 is mathematically equivalent to saying: z_i ← M p_{i-1}; orthogonalize z_i against all of p_1, ..., p_{i-1} by Gram-Schmidt. What was actually implemented was the original step 5, followed by a complete Gram-Schmidt reorthogonalization. The Lanczos Algorithm is stable only if some reorthogonalization is used [11, Theorems 2.3, 2.4, 2.5], and for simplicity we chose to use complete reorthogonalization. In step 7, b̂_{i-1} ≠ 0 is guaranteed from the assumption that the eigenvalues μ_i are distinct [8, p. 3].
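The algorithm above is short to express in NumPy. The following sketch (illustrative, not the authors' RATFOR code) follows steps 1-10, adds the complete reorthogonalization recommended in the text, and exploits the fact that M is diagonal so that Mp is an elementwise product; ahat and bhat correspond to â_i and b̂_i.

import numpy as np

def lanczos_tridiag(mu, p1):
    """Lanczos Algorithm of Section 3 with complete reorthogonalization.

    Given M = diag(mu) and a unit start vector p1, return the diagonal (ahat)
    and off-diagonal (bhat) of the tridiagonal matrix J-hat, together with the
    rows p_1, ..., p_{n-1} of P, stored as the rows of the array P.
    """
    mu = np.asarray(mu, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    m = len(mu)                               # m = n - 1
    P = np.zeros((m, m))
    P[0] = p1 / np.linalg.norm(p1)
    ahat = np.zeros(m)
    bhat = np.zeros(m - 1)
    ahat[0] = P[0] @ (mu * P[0])              # step 2
    for i in range(1, m):                     # steps 3-9 (i here is the paper's i - 1)
        z = mu * P[i - 1] - ahat[i - 1] * P[i - 1]
        if i >= 2:
            z -= bhat[i - 2] * P[i - 2]       # step 5
        z -= P[:i].T @ (P[:i] @ z)            # complete Gram-Schmidt reorthogonalization
        bhat[i - 1] = np.linalg.norm(z)       # step 6; nonzero when the mu_i are distinct
        P[i] = z / bhat[i - 1]                # step 7
        ahat[i] = P[i] @ (mu * P[i])          # step 8
    return ahat, bhat, P

The tridiagonal matrix Ĵ is then recovered as diag(ahat) plus bhat on the two off-diagonals, and the rows of P reproduce the eigenvector matrix a row at a time, as in (6b).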

It must be noted that the tridiagonal matrix Ĵ may be constructed by using Householder transformations instead of the Lanczos Algorithm with complete reorthogonalization. The details are given in [4], and this method can be substituted for the Lanczos Algorithm, giving theoretically equivalent results. However, it was noted in [4] that the method using Householder transformations is more expensive, but on badly conditioned problems it may be more stable, depending on the kind of reorthogonalization used [11]. On the other hand, the Lanczos Algorithm is much better suited for large sparse problems. Further comparison between the methods is necessary.

The algorithm to reconstruct J is then as follows:

Reconstruction Algorithm.
Compute the (n - 1) × (n - 1) submatrix Ĵ (defined just below (1a)):
1. Compute c, c^- using (4c), (4d).
2. Compute b_1 and p_1 by (5c), subject to ||p_1||_2 = 1.
3. Apply the Lanczos Algorithm starting with the vector p_1 and the matrix M = diag(μ_1, ..., μ_{n-1}), obtaining the vectors P and the tridiagonal matrix Ĵ.
Fill in the first row and column of J defined in (1a):
4. Compute a_1 = Σ_{i=1}^{n} λ_i - Σ_{i=1}^{n-1} μ_i (Eq. (4a)).
5. Compute b_n by either (5b) or from the definition of β: b_n = β / (b_1 b_2 ⋯ b_{n-1}).
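Putting the pieces together, a compact driver for the Reconstruction Algorithm, reusing the illustrative sketches above (bordered_matrix, border_data, lanczos_tridiag), might look as follows; it is a sketch only, makes one fixed sign choice for c and c^-, and assumes β lies in (0, β_max).

import numpy as np

def reconstruct_periodic_jacobi(lam, mu, beta):
    """Steps 1-5 of the Reconstruction Algorithm (one sign choice only)."""
    lam = np.asarray(lam, dtype=float)
    mu = np.asarray(mu, dtype=float)
    n = len(lam)
    _, c = bordered_matrix(lam, mu)                  # step 1: c from (4c)
    _, b1, p1, _, _ = border_data(lam, mu, beta, c)  # steps 1-2: c^-, b_1, p_1
    ahat, bhat, P = lanczos_tridiag(mu, p1)          # step 3: J-hat and P
    a1 = lam.sum() - mu.sum()                        # step 4: (4a)
    bn = beta / (b1 * np.prod(bhat))                 # step 5: b_n = beta/(b_1 ... b_{n-1})
    J = np.zeros((n, n))
    J[1:, 1:] = np.diag(ahat) + np.diag(bhat, 1) + np.diag(bhat, -1)
    J[0, 0] = a1
    J[0, 1] = J[1, 0] = b1
    J[0, -1] = J[-1, 0] = bn
    return J

Comparing np.linalg.eigvalsh(J) with the initial λ_i gives the same kind of eigenvalue-discrepancy check that is used in the numerical results below.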

4. Numerical Results. A RATFOR program has been written to test this method. It successfully reconstructed the example suggested in [3], with n = 4, 5, 6, namely a matrix with all off-diagonal entries equal to 1, for which the μ_i are simple, but the λ_i are not. The initial data used in these test cases, as well as the vectors c, c^-, are listed below:

n = 4, β = 1
    initial λ_i:  0.          2.          2.          4.
    initial μ_i:   .58578644  2.          3.41421356
    c:            1.          0.          1.
    c^-:          0.          1.41421356  0.

n = 5, β = 1
    initial λ_i:   .38196601   .38196601  2.61803399  2.61803399  4.
    initial μ_i:   .38196601  1.38196601  2.61803399  3.61803399
    c:            0.          1.20300191  0.           .74349607
    c^-:           .74349607  0.          1.20300191  0.

n = 6, β = 1
    initial λ_i:  0.          1.          1.          3.          3.          4.
    initial μ_i:   .26794919  1.          2.          3.          3.73205081
    c:             .57735027  0.          1.15470054  0.           .57735027
    c^-:          0.          1.          0.          1.          0.

The following example illustrates the fact that the b_i need not be positive. This example is simply the J^-, defined by (1b), corresponding to the J used in the above case n = 4.

n = 4, β = -1
    initial λ_i:   .58578644   .58578644  3.41421356  3.41421356
    initial μ_i:   .58578644  2.          3.41421356
    c:            0.          1.41421356  0.
    c^-:          1.          0.          1.

The above cases all have the remarkable property that for all i, i = 1, ..., n - 1, either c_i = 0 or c_i^- = 0, so that the reconstructed matrix J is essentially unique (up to signs of the off-diagonal elements). In the following example, we show a more general case, in which the answer is not unique. The results were obtained by starting with one set of eigenvalue data and cycling through all possible sign combinations for the c_i, c_i^-. The initial data and the c vectors were:

n = 4, β = .25
    initial λ_i:  0.          2.          2.          4.
    initial μ_i:   .58578644  2.          3.41421356
    c:            1.          0.          1.
    c^-:           .86602540   .70710678   .86602540

The 4 distinct matrices that were computed are given below. For brevity, we show each distinct matrix only once, since each one actually appeared 8 times. Since there were no degeneracies encountered in this example, and by observing that any matrix satisfying the initial conditions (given by the λ_i, μ_i, β) must satisfy (4a)-(4d) and (5c), it follows that the following are all the possible distinct answers for the given initial data above.

    a_i:  2.          2.          2.          2.
    b_i:  1.36602540  1.36602540   .36602540   .36602540

    a_i:  2.          2.          2.          2.
    b_i:   .36602540   .36602540  1.36602540  1.36602540

    a_i:  2.           .77525513  2.          3.22474487
    b_i:  1.           .5          .5         1.

    a_i:  2.          3.22474487  2.           .77525513
    b_i:  1.           .5          .5         1.

As a final example, we tried example 3 of Table 2 on page 1218 of [3], in which the matrices in question were reconstructed only with large errors. The matrices are defined as in [3], for n = 5, 10, 15, 20, 25, 30, with b_n = b_{n-1} = 1 and a_n = 0. The algorithm was suitably modified to account for the fact that the submatrix Ĵ in this example was obtained by deleting the last row and column rather than the first. Unfortunately, there are too many combinations of signs for the c_i, c_i^- to compute all the possible matrices. For n = 10 alone there are 262144 different choices of signs. However, in each case we are able to compare the eigenvalues of the reconstructed matrix with the initial eigenvalues. Since the eigenvalues of symmetric matrices are always well-conditioned, the discrepancy between the two sets of eigenvalues is a good indicator of the error in the reconstructed matrices.

    n     Norm of discrepancy
    5     .364539663e-15
    10    .55857184e-15
    15    .1329552e-14
    20    .191718261e-14
    25    .343744e-14
    30    .3472165e-14

Here, the norm of discrepancy is the square root of the sum of the squares of the differences between the two sets of eigenvalues.

Computer Science Department
Institute of Technology
University of Minnesota
Minneapolis, Minnesota 55455

Computer Science Department
Stanford University
Stanford, California 94305

1. D. Boley & G. H. Golub, Inverse Eigenvalue Problems for Band Matrices, Lecture Notes in Mathematics, Numerical Analysis (Dundee, 1977), Springer-Verlag, Berlin and New York, 1977.
2. D. Boley & G. H. Golub, The Matrix Inverse Eigenvalue Problem for Periodic Matrices, invited paper at the Fourth Conference on Basic Problems of Numerical Analysis (LIBLICE IV), Pilsen, Czechoslovakia, Sept. 1978.
3. W. Ferguson, Jr., "The construction of Jacobi and periodic Jacobi matrices with prescribed spectra," Math. Comp., v. 35, 1980, pp. 1203-1220.
4. F. W. Biegler-König, "Construction of band matrices from spectral data," Linear Algebra Appl., v. 40, 1981, pp. 79-84.
5. John Thompson, personal communication.
6. G. W. Stewart, Introduction to Matrix Computations, Academic Press, New York, 1973.
7. C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators," J. Res. Nat. Bur. Standards, v. 45, 1950, pp. 255-282.
8. J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
9. H. Flaschka, "The Toda lattice," Phys. Rev. B, v. 9, 1974, pp. 1924-1925.
10. H. Flaschka & D. W. McLaughlin, "Canonically conjugate variables for the Korteweg-de Vries equation and the Toda lattice with periodic boundary conditions," Progr. Theoret. Phys., v. 55, 1976, pp. 438-456.
11. H. Simon, The Lanczos Algorithm for Solving Symmetric Linear Systems, Ph.D. Thesis, Center for Pure and Applied Mathematics, Univ. of Calif., Berkeley, Report No. PAM-74, April 1982.