Evaluation of small elements of the eigenvectors of certain symmetric tridiagonal matrices with high relative accuracy


Evaluation of the eigenvectors of symmetric tridiagonal matrices is one of the most basic tasks in numerical linear algebra. It is a widely known fact that, in the case of well separated eigenvalues, the eigenvectors can be evaluated with high relative accuracy. Nevertheless, in general, each coordinate of the eigenvector is evaluated with only high absolute accuracy. In particular, those coordinates whose magnitude is below the machine precision are not expected to be evaluated to any correct digit at all. In this paper, we address the problem of evaluating small (e.g. 10^{-50}) coordinates of the eigenvectors of certain symmetric tridiagonal matrices with high relative accuracy. We propose a numerical algorithm to solve this problem, and carry out error analysis. Also, we discuss some applications in which this task is necessary. Our results are illustrated via several numerical experiments.

Andrei Osipov. Research Report YALEU/DCS/TR-1460, Yale University, December 13, 2012. Approved for public release: distribution is unlimited.

Keywords: symmetric tridiagonal matrices, eigenvectors, small elements, high accuracy

Contents

1 Introduction 3
2 Overview 5
3 Mathematical and Numerical Preliminaries 6
  3.1 Real Symmetric Matrices 7
  3.2 Bessel Functions 8
  3.3 Prolate Spheroidal Wave Functions 9
  3.4 Numerical Tools 11
    3.4.1 Power and Inverse Power Methods 11
    3.4.2 Sturm Sequence 12
    3.4.3 Evaluation of Bessel Functions 13
4 Analytical Apparatus 14
  4.1 Properties of Certain Tridiagonal Matrices 14
  4.2 Local Properties of Eigenvectors 18
  4.3 Error Analysis for Certain Matrices 36
5 Numerical Algorithms 42
  5.1 Problem Settings 43
  5.2 Informal Description of the Algorithm 44
  5.3 Detailed Description of the Algorithm 45
    5.3.1 The region of growth: x_1,...,x_{k+1} 45
    5.3.2 The leftmost part of the oscillatory region: x_{k+2},...,x_{k+l-1} 46
    5.3.3 Up to the middle of the oscillatory region: x_{k+l},...,x_{p+1} 46
    5.3.4 The region of decay: x^_m,...,x^_n 46
    5.3.5 The rightmost part of the oscillatory region: x^_{m-(r-2)},...,x^_{m-1} 47
    5.3.6 Down to the middle of the oscillatory region: x^_p,...,x^_{m-(r-1)} 47
    5.3.7 Gluing x_1,...,x_p,x_{p+1} and x^_p,x^_{p+1},...,x^_n together 48
    5.3.8 Normalization 48
  5.4 Short Description of the Algorithm 50
  5.5 Error Analysis 50
  5.6 Related Algorithms 53
    5.6.1 A Simplified Algorithm 54
    5.6.2 Inverse Power 54
    5.6.3 Jacobi Rotations 55
    5.6.4 Gaussian Elimination 56
6 Applications 56
  6.1 Bessel Functions 56
  6.2 Prolate Spheroidal Wave Functions 57

7 Numerical Results 58
  7.1 Experiment 1 58
  7.2 Experiment 2 61
  7.3 Experiment 3 63

1 Introduction

The evaluation of eigenvectors of symmetric tridiagonal matrices is one of the most basic tasks in numerical linear algebra (see, for example, such classical texts as [3], [4], [5], [6], [8], [9], [16], [19], [20]). Several algorithms to perform this task have been developed; these include Power and Inverse Power methods, Jacobi Rotations, and the QR and QL algorithms, to mention just a few. Many of these algorithms have become standard and widely known tools.

In the case when the eigenvalues of the matrix in question are well separated, most of these algorithms will evaluate the corresponding eigenvectors to high relative accuracy. More specifically, suppose that n > 0 is an integer, that v in R^n is the vector to be evaluated, and that v^ in R^n is its numerical approximation, produced by one of the standard algorithms. Then,

||v - v^|| / ||v|| <= eps,    (1)

where ||.|| denotes the Euclidean norm, and eps is the machine precision (e.g. eps ~ 10^{-16} for double-precision calculations). However, a closer look at (1) reveals that it only guarantees that the coordinates of v be evaluated to high absolute accuracy. This is due to the following trivial observation. Suppose that we add eps*||v|| to the first coordinate v^_1 of v^. Then, the perturbed v^ will not violate (1). On the other hand, the relative error of v^_1 can be as large as

|v_1 + eps*||v|| - v_1| / |v_1| = eps*||v|| / |v_1|.    (2)

In particular, if, say, ||v|| = 1 and |v_1| < eps, then v^_1 is not guaranteed to approximate v_1 to any correct digit at all!

Sometimes the poor relative accuracy of small coordinates is of no concern; for example, this is usually the case when v is only used to project other vectors onto it. Nevertheless, in several prominent problems, small coordinates of the eigenvector are required to be evaluated to high relative accuracy.
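The gap between (1) and (2) is easy to see numerically. The following sketch (with made-up numbers) perturbs the small first coordinate of a unit vector by eps*||v||: the Euclidean bound (1) survives, while the relative error of the first coordinate is enormous.

```python
import math

# A unit vector whose first coordinate is far below machine precision
# (the numbers here are made up for illustration).
v1 = 1.0e-30
v = [v1, math.sqrt(1.0 - v1 * v1)]

eps = 1.0e-16  # roughly double-precision machine epsilon

# Perturb the first coordinate by eps*||v||: the bound (1) still holds...
v_hat = [v[0] + eps, v[1]]
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(v_hat, v)))
assert err <= eps * 1.01

# ...yet the relative error of the first coordinate is about 1e14,
# exactly as predicted by (2): eps * ||v|| / |v_1|.
rel = abs(v_hat[0] - v[0]) / abs(v[0])
assert rel > 1.0e13
```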
These problems include the evaluation of Bessel functions (see Sections 3.2, 3.4.3, 6.1), and the evaluation of some quantities associated with prolate spheroidal wave functions (see Sections 3.3, 6.2, and also [14], [15]), among others.

In this paper, we propose an algorithm for the evaluation of the coordinates of eigenvectors of certain symmetric tridiagonal matrices, to high relative accuracy. As the running example, we consider the matrices whose non-zero off-diagonal elements are constant (and equal to one), and whose diagonal elements constitute a monotonically increasing sequence (see Definition 1). The connection of such matrices to Bessel functions and prolate spheroidal wave functions is discussed in Sections 3.4.3, 6.2, respectively. Also, we carry out detailed

error analysis of our algorithm. While our algorithm can be viewed as a modification of already existing (and well known) algorithms, such error analysis, perhaps surprisingly, appears to be new. In addition, we conduct several numerical experiments, both to illustrate the performance of our algorithm and to compare it to some classical algorithms.

The following is the principal analytical result of this paper.

Theorem 1. Suppose that a >= 1 and c >= 1 are real numbers, and that n > 0 is an integer. Suppose also that A_1,...,A_n are defined via the formula

A_j = (j/c)^a + 2,    (3)

for every j = 1,...,n, and that the n by n symmetric tridiagonal matrix A is defined via the formula

    [ A_1  1                         ]
    [ 1    A_2  1                    ]
A = [      1    A_3  1               ]    (4)
    [            .    .    .         ]
    [              1   A_{n-1}   1   ]
    [                   1       A_n  ]

Suppose furthermore that lambda is an eigenvalue of A, and that

lambda > 2 + A_k,    (5)

for some integer 1 <= k <= n. Suppose, in addition, that X = (X_1,...,X_n) in R^n is a lambda-eigenvector of A. If the entries of A are defined to the relative precision eps > 0, then the first k coordinates X_1,...,X_k of X are defined to the relative precision R(c,a), where

R(c,a) <= eps * O(c^{4a/(a+2)}),    c -> infinity.    (6)

Remark 1. We observe that, according to (6), the relative precision of X_1,...,X_k does not depend on their order of magnitude. In other words, even if, say, X_1 is significantly smaller than eps, it is still defined to fairly high relative precision.

Remark 2. The definition of the entries of the matrix A in Theorem 1 is motivated by particular applications (see Section 6). On the other hand, Theorem 1 and Remark 1 generalize to a much wider class of matrices; these include, for example, perturbations of A, defined via (4), as well as matrices whose diagonal entries are of a more general form than (3). While such generalizations are straightforward (and are based, in part, on the results of Section 4.2), the analysis is somewhat long, and will be published at a later date.

The proof of Theorem 1 is constructive and somewhat technical (see Section 4, in particular, Section 4.3).
The resulting numerical algorithm for the evaluation of the eigenvector X is described in Sections 5.2, 5.3, 5.4, and Section 5.5 contains its error analysis.

The paper is organized as follows. Section 2 contains a brief informal overview of the principal algorithm of this paper. In Section 3, we summarize a number of well known mathematical and numerical facts to be used in the rest of this paper. In Section 4, we introduce the necessary analytical apparatus and carry out the analysis. In Section 5, we introduce the principal numerical algorithm of this paper, and perform its error analysis; we also describe several other related algorithms. In Section 6, we discuss some applications of our algorithm to other computational problems. In Section 7, we illustrate the performance of our algorithm via several numerical examples, and compare it to some related classical algorithms.

2 Overview

In this section, we overview the intuition behind the principal algorithm of this paper and the corresponding error analysis. Suppose that A is a symmetric tridiagonal matrix, whose non-zero off-diagonal elements are constant (and equal to one), and whose diagonal elements constitute a monotonically increasing sequence A_1 < A_2 < ... (see Definition 1). Then, the coordinates of any eigenvector of A satisfy a certain three-term linear recurrence relation with non-constant coefficients, that depend on the corresponding eigenvalue lambda (see Theorem 7). More specifically,

x_{k+1} - (lambda - A_k)*x_k + x_{k-1} = 0,    (7)

for every k = 2,...,n-1, where x = (x_1,...,x_n) in R^n is the eigenvector corresponding to the eigenvalue lambda. It turns out that the qualitative properties of the recurrence relation (7) depend on whether lambda - A_k is greater than 2, less than -2, or between -2 and 2. Both our algorithm and the subsequent error analysis are based on the following three fairly simple observations.

Observation 1 ("growth"). Suppose that B > 2 and x_1, x_2, x_3 are real numbers, and that

x_3 - B*x_2 + x_1 = 0.    (8)

If 0 < x_1 < x_2, then the evaluation of x_3 from x_1, x_2 via (8) is stable (accurate); moreover, x_3 > x_2.
On the other hand, the evaluation of x_1 from x_2, x_3 via (8) is unstable (inaccurate), since, loosely speaking, we attempt to evaluate a small number as a difference of two bigger positive numbers.

Remark 3. This intuition is generalized and developed in Theorem 11, Corollary 3, Theorem 12, and serves as a basis for the step of the principal algorithm described in Section 5.3.1 (see also Observation 1 in Section 5.5).

Observation 2 ("decay"). Suppose now that B < -2 and y_1, y_2, y_3 are real numbers, and that

y_3 - B*y_2 + y_1 = 0.    (9)

If y_3 and y_2 have opposite signs, and |y_2| > |y_3|, then the evaluation of y_1 from y_2, y_3 via (9) is stable (accurate); moreover, y_1 and y_2 have opposite signs, and |y_1| > |y_2|. On the other

hand, the evaluation of y_3 from y_1, y_2 via (9) is unstable (inaccurate), since, loosely speaking, we attempt to obtain a small number as a sum of two numbers of opposite signs and of larger magnitude.

Remark 4. This intuition is generalized and developed in Theorem 13, and serves as a basis for the step of the principal algorithm described in Section 5.3.4 (see also Observation 3 in Section 5.5).

Observation 3 ("oscillations"). Consider the following example. Suppose that a > 0 is a real number, and that the real numbers x_0, x_1, x_2, ... are defined via the formula

x_k = cos(k*a),    (10)

for every k = 0, 1, .... We recall the trigonometric identity

cos((k+1)*a) + cos((k-1)*a) = 2*cos(a)*cos(k*a),    (11)

that holds for all real a, k, and substitute (10) into (11) to conclude that

x_{k+1} = 2*cos(a)*x_k - x_{k-1},    (12)

for every k = 1, 2, .... Obviously, the sequence x_0, x_1, ... contains elements of order one as well as relatively small elements; the latter ones are obtained as a difference of two larger elements, potentially resulting in a loss of accuracy, and complicating the error analysis of the recurrence relation (12). Consider, however, the complex numbers z_0, z_1, z_2, ..., defined via the formula

z_k = e^{ika},    (13)

for every k = 0, 1, .... We combine (10) and (13) to conclude that

x_k = Re(z_k),    (14)

for every k = 0, 1, 2, .... Moreover,

z_{k+1} = e^{ia}*z_k,    (15)

for every k = 0, 1, .... The stability of the recurrence relation (15) is obvious (if its elements are stored in polar coordinates, each subsequent element is obtained from its predecessor via rotating the latter by the angle a; also, the absolute values of z_0, z_1, ... are all the same, in sharp contrast to those of x_0, x_1, ...). Moreover, even at first glance, (15) seems to be easier to analyze than (12).

Remark 5. This intuition is generalized and developed in Theorems 14, 15, 16, 17, Corollaries 4, 5, Theorems 23, 24, Corollaries 7, 8, and serves as a basis for the steps of the principal algorithm described in Sections 5.3.3, 5.3.6 (see also Observations 5-10 in Section 5.5).
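The first two observations can be illustrated on the constant-coefficient analogue of (7). The sketch below (with an arbitrary choice B = 2.5, so that B > 2) follows the decaying solution of the recurrence in both directions: evaluated forward, a tiny perturbation is amplified like the growing solution, while evaluated backward it stays tiny.

```python
# Constant-coefficient model of (7): x_{k+1} = B*x_k - x_{k-1} with
# B = 2.5 (so B > 2); its solutions are r^k with r = 2 (growing) and
# r = 0.5 (decaying). All numbers here are illustrative.
B = 2.5
n = 60

# Unstable direction: following the DECAYING solution 0.5^k forward
# amplifies a tiny relative perturbation of x_1 like 2^k.
x0, x1 = 1.0, 0.5 * (1.0 + 1.0e-15)
for _ in range(n):
    x0, x1 = x1, B * x1 - x0
forward_rel_err = abs(x1 - 0.5 ** (n + 1)) / 0.5 ** (n + 1)

# Stable direction: running the same recurrence backwards,
# x_{k-1} = B*x_k - x_{k+1}, the decaying solution dominates and the
# perturbation stays of relative size about 1e-15.
y1, y0 = 0.5 ** (n + 1), 0.5 ** n * (1.0 + 1.0e-15)
for _ in range(n):
    y1, y0 = y0, B * y0 - y1
backward_rel_err = abs(y0 - 1.0)

assert forward_rel_err > 1.0e3     # the forward error blew up
assert backward_rel_err < 1.0e-12  # the backward error stayed tiny
```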
3 Mathematical and Numerical Preliminaries

In this section, we introduce notation and summarize several facts to be used in the rest of the paper.

3.1 Real Symmetric Matrices

In this subsection, we summarize several well known facts about certain symmetric tridiagonal matrices. In the following theorem, we describe a well known characterization of the eigenvalues of a symmetric matrix.

Theorem 2 (Courant-Fischer Theorem). Suppose that n > 0 is a positive integer, and that A is an n by n real symmetric matrix. Suppose also that

sigma_1 >= sigma_2 >= ... >= sigma_n    (16)

are the eigenvalues of A, and 1 <= k <= n is an integer. Then,

sigma_k = max{ min{ x^T A x : x in U, ||x|| = 1 } : U subspace of R^n, dim(U) = k },    (17)

for every k = 1,...,n. Also,

sigma_k = min{ max{ x^T A x : x in V, ||x|| = 1 } : V subspace of R^n, dim(V) = n - k + 1 },    (18)

for every k = 1,...,n.

The following theorem is an immediate consequence of Theorem 2.

Theorem 3. Suppose that n > 0 is an integer, and A, B are symmetric positive definite n by n matrices. Suppose also that

alpha_1 <= alpha_2 <= ... <= alpha_n    (19)

are the eigenvalues of A, and that

beta_1 <= beta_2 <= ... <= beta_n    (20)

are the eigenvalues of B. Suppose furthermore that A - B is positive definite. Then,

beta_k <= alpha_k,    (21)

for every integer k = 1,...,n.

In the following theorem, we describe the eigenvalues and eigenvectors of a very simple symmetric tridiagonal matrix.

Theorem 4. Suppose that n > 1 is a positive integer, and B is the n by n symmetric tridiagonal matrix, defined via the formula

    [ 2  1             ]
    [ 1  2  1          ]
B = [    1  2  1       ]    (22)
    [       .  .  .    ]
    [          1  2    ]

In other words, all the diagonal entries of B are equal to 2, and all the super- and sub-diagonal entries of B are equal to 1. Suppose also that

sigma_1 >= sigma_2 >= ... >= sigma_{n-1} >= sigma_n    (23)

are the eigenvalues of B, and v_1,...,v_n in R^n are the corresponding unit-length eigenvectors with positive first coordinate. Then,

sigma_k = 2*(cos(pi*k/(n+1)) + 1),    (24)

for each k = 1,...,n. Also,

v_k = sqrt(2/(n+1)) * ( sin(pi*k/(n+1)), sin(2*pi*k/(n+1)), ..., sin(n*pi*k/(n+1)) ),    (25)

for each k = 1,...,n.

Remark 6. The spectral gap of the matrix B, defined via (22), is

sigma_1 - sigma_2 = 4*sin(3*pi/(2(n+1)))*sin(pi/(2(n+1))) = O(n^{-2}),    (26)

due to (24) above. Also, the condition number of B is

kappa(B) = sigma_1/sigma_n = (1 + cos(pi/(n+1)))/(1 - cos(pi/(n+1))) = O(n^2).    (27)

3.2 Bessel Functions

In this section, we describe some well known properties of Bessel functions. All of these properties can be found, for example, in [1], [7]. Suppose that n >= 0 is a non-negative integer. The Bessel function of the first kind J_n : C -> C is defined via the formula

J_n(z) = sum_{m=0}^{infinity} (-1)^m * (z/2)^{2m+n} / (m!*(m+n)!),    (28)

for all complex z. Also, the function J_{-n} : C -> C is defined via the formula

J_{-n}(z) = (-1)^n * J_n(z),    (29)

for all complex z. The Bessel functions J_0, J_{+-1}, J_{+-2}, ... satisfy the three-term recurrence relation

z*J_{n-1}(z) + z*J_{n+1}(z) = 2n*J_n(z),    (30)

for any complex z and every integer n. In addition,

sum_{n=-infinity}^{infinity} J_n^2(x) = 1,    (31)

for all real x. In the following theorem, we rewrite (30), (31) in the matrix form.
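The explicit eigenpair (24), (25) is easy to verify numerically; the following sketch applies B to the vector (25) and checks both the eigenvalue equation and the unit-length normalization.

```python
import math

# Check of Theorem 4: for the n-by-n tridiagonal matrix B with diagonal
# entries 2 and off-diagonal entries 1, the pair (24), (25) should
# satisfy B*v_k = sigma_k * v_k, and v_k should have unit length.
n, k = 10, 3
sigma = 2.0 * (math.cos(math.pi * k / (n + 1)) + 1.0)
v = [math.sqrt(2.0 / (n + 1)) * math.sin(math.pi * k * j / (n + 1))
     for j in range(1, n + 1)]

# Apply B to v: (B*v)_j = v_{j-1} + 2*v_j + v_{j+1}, missing neighbors = 0
Bv = [(v[j - 1] if j > 0 else 0.0) + 2.0 * v[j]
      + (v[j + 1] if j < n - 1 else 0.0) for j in range(n)]

assert max(abs(b - sigma * t) for b, t in zip(Bv, v)) < 1.0e-12
assert abs(sum(t * t for t in v) - 1.0) < 1.0e-12  # unit length
```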

Theorem 5. Suppose that x > 0 is a real number, and that the entries of the infinite tridiagonal symmetric matrix A(x) are defined via the formulae

A_{n,n-1}(x) = A_{n,n+1}(x) = 1,    (32)

for all integer n, and

A_{n,n}(x) = -2n/x,    (33)

for every integer n. Suppose also that the coordinates of the infinite vector v(x) are defined via the formula

v_n(x) = J_n(x),    (34)

for every integer n. Then, v is a unit vector (in the l^2 sense), and, moreover,

A(x)*v(x) = 0,    (35)

where 0 stands for the infinite dimensional zero vector.

3.3 Prolate Spheroidal Wave Functions

In this section, we summarize several well known facts about prolate spheroidal wave functions. Unless stated otherwise, all these facts can be found in [1], [17], [11], [18], [10]. Suppose that c > 0 is a real number. The prolate spheroidal wave functions (PSWFs) corresponding to the band limit c are the unit-norm eigenfunctions psi_0^{(c)}, psi_1^{(c)}, ... of the integral operator F_c : L^2[-1,1] -> L^2[-1,1], defined via the formula

F_c[phi](x) = int_{-1}^{1} phi(t)*e^{icxt} dt.    (36)

The numerical algorithm for the evaluation of PSWFs, described in [1], is based on the following facts. Suppose that P_0, P_1, ... are the Legendre polynomials (see, for example, [1], [7]). For every real c > 0 and every integer n >= 0, the prolate spheroidal wave function psi_n^{(c)} can be expanded into the series of Legendre polynomials

psi_n(x) = sum_{k=0}^{infinity} beta_k^{(n,c)} * P_k(x)*sqrt(k + 1/2),    (37)

for all -1 <= x <= 1, where beta_0^{(n,c)}, beta_1^{(n,c)}, ... are defined via the formula

beta_k^{(n,c)} = int_{-1}^{1} psi_n(x)*P_k(x)*sqrt(k + 1/2) dx,    (38)

for every k = 0, 1, 2, .... Moreover,

(beta_0^{(n,c)})^2 + (beta_1^{(n,c)})^2 + (beta_2^{(n,c)})^2 + ... = 1.    (39)

Suppose also that the non-zero entries of the infinite symmetric matrix A^{(c)} are defined via the formulae

A^{(c)}_{k,k} = k(k+1) + c^2 * (2k(k+1) - 1) / ((2k+3)(2k-1)),

A^{(c)}_{k,k+2} = A^{(c)}_{k+2,k} = c^2 * (k+2)(k+1) / ((2k+3)*sqrt((2k+1)(2k+5))),    (40)

for every k = 0, 1, 2, .... The matrix A^{(c)} is not tridiagonal; however, it naturally splits into two symmetric tridiagonal infinite matrices A^{c,even} and A^{c,odd}, defined, respectively, via

A^{c,even}: diagonal entries A^{(c)}_{0,0}, A^{(c)}_{2,2}, A^{(c)}_{4,4}, ..., off-diagonal entries A^{(c)}_{0,2}, A^{(c)}_{2,4}, A^{(c)}_{4,6}, ...,    (41)

A^{c,odd}: diagonal entries A^{(c)}_{1,1}, A^{(c)}_{3,3}, A^{(c)}_{5,5}, ..., off-diagonal entries A^{(c)}_{1,3}, A^{(c)}_{3,5}, A^{(c)}_{5,7}, ....    (42)

Suppose that chi_0^{(c)} < chi_2^{(c)} < ... are the eigenvalues of A^{c,even}, and chi_1^{(c)} < chi_3^{(c)} < ... are the eigenvalues of A^{c,odd}. If n is even, then

A^{c,even} * (beta_0^{(n,c)}, beta_2^{(n,c)}, ...)^T = chi_n^{(c)} * (beta_0^{(n,c)}, beta_2^{(n,c)}, ...)^T,    (43)

and beta_k^{(n,c)} = 0 for every odd k. If n is odd, then

A^{c,odd} * (beta_1^{(n,c)}, beta_3^{(n,c)}, ...)^T = chi_n^{(c)} * (beta_1^{(n,c)}, beta_3^{(n,c)}, ...)^T,    (44)

and beta_k^{(n,c)} = 0 for every even k. The eigenvectors in (43), (44) have unit length, due to (39). The algorithm for the evaluation of psi_n^{(c)} starts with computing the corresponding eigenvector of A^{c,even}, if n is even, or of A^{c,odd}, if n is odd. In practice, these matrices are replaced with their N x N upper left square submatrices, where N > 0 is a sufficiently large integer (see e.g. [1] for further details). The coordinates of this eigenvector are the coefficients of the expansion of psi_n^{(c)} into the Legendre series (37) (see also (43), (44)). Once these coefficients are precomputed, we evaluate psi_n^{(c)}(x) for any given -1 <= x <= 1 via evaluating the sum

sum_{k=0}^{N-1} beta_k^{(n,c)} * P_k(x)*sqrt(k + 1/2),    (45)

in O(N) operations (see e.g. [1], [14], [15] for further details).
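The sum (45) can be evaluated in O(N) operations via the standard three-term recurrence for the Legendre polynomials. In the sketch below the coefficient vector is a made-up placeholder, not a genuine eigenvector of (43) or (44); the check picks out a single normalized Legendre polynomial.

```python
# Sketch of the O(N) evaluation of the Legendre sum (45); beta below is
# an illustrative placeholder, NOT a genuine PSWF coefficient vector.
def legendre_sum(beta, x):
    """Evaluate sum_k beta[k] * P_k(x) * sqrt(k + 1/2) via the standard
    recurrence (k+1)*P_{k+1} = (2k+1)*x*P_k - k*P_{k-1}."""
    p_prev, p = 1.0, x                       # P_0(x), P_1(x)
    total = beta[0] * p_prev * 0.5 ** 0.5
    if len(beta) > 1:
        total += beta[1] * p * 1.5 ** 0.5
    for k in range(1, len(beta) - 1):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
        total += beta[k + 1] * p * (k + 1.5) ** 0.5
    return total

# Picking out a single coefficient recovers one normalized Legendre
# polynomial, e.g. P_2(x) = (3x^2 - 1)/2 times sqrt(2 + 1/2):
x = 0.3
val = legendre_sum([0.0, 0.0, 1.0], x)
assert abs(val - (2.5 ** 0.5) * (3 * x * x - 1) / 2) < 1.0e-12
```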

3.4 Numerical Tools

In this subsection, we summarize several numerical techniques to be used in this paper.

3.4.1 Power and Inverse Power Methods

The methods described in this subsection are widely known and can be found, for example, in [3], [6]. Suppose that A is an n x n real symmetric matrix, whose eigenvalues satisfy

|sigma_1| > |sigma_2| >= |sigma_3| >= ... >= |sigma_n|.    (46)

The Power Method evaluates sigma_1 and the corresponding unit eigenvector in the following way.

Set v_0 to be a random vector in R^n such that ||v_0|| = sqrt(v_0^T v_0) = 1. Set j = 1 and eta_0 = 0.
Compute v^_j = A*v_{j-1}. Set eta_j = v_{j-1}^T v^_j. Set v_j = v^_j / ||v^_j||.
If |eta_j - eta_{j-1}| is sufficiently small, stop. Otherwise, set j = j + 1 and repeat the iteration.

The output value eta_j approximates sigma_1, and v_j approximates a unit eigenvector corresponding to sigma_1. The cost of each iteration is dominated by the cost of evaluating A*v_{j-1}. The rate of convergence of the algorithm is linear and equals |sigma_2|/|sigma_1|; the error after j iterations is of order (|sigma_2|/|sigma_1|)^j.

Remark 7. In this paper, we use the modification of this algorithm in which i and eta_j are evaluated via the formulae

i = argmax{ |v_{j-1}(k)| : k = 1,...,n },    eta_j = v^_j(i) / v_{j-1}(i).    (47)

The Inverse Power Method evaluates the eigenvalue sigma_k of A and a corresponding unit eigenvector, given an initial approximation sigma of sigma_k that satisfies the inequality

|sigma - sigma_k| < min{ |sigma - sigma_j| : j != k }.    (48)

Conceptually, the Inverse Power Method is an application of the Power Method to the matrix B = (A - sigma*I)^{-1}. In practice, B does not have to be evaluated explicitly, and it suffices to be able to solve the linear system of equations

(A - sigma*I)*v^_j = v_{j-1},    (49)

for the unknown v^_j, on each iteration of the algorithm.

Remark 8. If the matrix A is tridiagonal, the system (49) can be solved in O(n) operations, for example, by means of Gaussian elimination or QR decomposition (see e.g. [3], [6], [20]).
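A minimal sketch of the Inverse Power Method for a symmetric tridiagonal matrix, with the shifted system (49) solved in O(n) operations by Gaussian elimination (the Thomas algorithm), as in Remark 8. The matrix and shift in the check are illustrative (the matrix of Theorem 4), and the stopping rule is a fixed iteration count rather than the test on eta_j described above.

```python
def solve_tridiag(d, e, b):
    """Solve T*y = b, where T is symmetric tridiagonal with diagonal d
    (length n) and off-diagonal e (length n-1); O(n) elimination."""
    n = len(d)
    cp, bp = [0.0] * n, [0.0] * n
    cp[0], bp[0] = e[0] / d[0], b[0] / d[0]
    for i in range(1, n):
        m = d[i] - e[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = e[i] / m
        bp[i] = (b[i] - e[i - 1] * bp[i - 1]) / m
    for i in range(n - 2, -1, -1):   # back substitution
        bp[i] -= cp[i] * bp[i + 1]
    return bp

def inverse_power(d, e, sigma, iters=50):
    """Approximate the eigenvalue of T nearest to the shift sigma, and a
    corresponding unit eigenvector, by repeatedly solving (49)."""
    n = len(d)
    v = [float(j + 1) for j in range(n)]   # deterministic starting vector
    for _ in range(iters):
        w = solve_tridiag([di - sigma for di in d], e, v)
        nrm = sum(t * t for t in w) ** 0.5
        v = [t / nrm for t in w]
    # Rayleigh quotient v^T T v approximates the eigenvalue
    Tv = [(e[j - 1] * v[j - 1] if j > 0 else 0.0) + d[j] * v[j]
          + (e[j] * v[j + 1] if j < n - 1 else 0.0) for j in range(n)]
    return sum(a * b for a, b in zip(v, Tv)), v

# Check on the matrix of Theorem 4 (diagonal 2, off-diagonal 1, n = 10),
# whose smallest eigenvalue is 2*(1 - cos(pi/11)) by (24):
import math
lam, _ = inverse_power([2.0] * 10, [1.0] * 9, 0.05)
assert abs(lam - 2.0 * (1.0 - math.cos(math.pi / 11))) < 1.0e-10
```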

3.4.2 Sturm Sequence

The following theorem can be found, for example, in [20] (see also [2]). It provides the basis for an algorithm for evaluating the kth smallest eigenvalue of a symmetric tridiagonal matrix.

Theorem 6 (Sturm sequence). Suppose that

    [ a_1  b_2                        ]
    [ b_2  a_2  b_3                   ]
C = [       .    .    .               ]    (50)
    [         b_{n-1}  a_{n-1}  b_n   ]
    [                  b_n      a_n   ]

is a symmetric tridiagonal matrix such that none of b_2,...,b_n is zero. Then, its n eigenvalues satisfy

sigma_1(C) < ... < sigma_n(C).    (51)

Suppose also that C_k is the k x k leading principal submatrix of C, for every integer k = 1,...,n. We define the polynomials p_{-1}, p_0, ..., p_n via the formulae

p_{-1}(x) = 0,    p_0(x) = 1,    (52)

and

p_k(x) = det(C_k - x*I_k),    (53)

for every k = 1,...,n. In other words, p_k is the characteristic polynomial of C_k, for every k = 1,...,n. Then,

p_k(x) = (a_k - x)*p_{k-1}(x) - b_k^2*p_{k-2}(x),    (54)

for every integer k = 1, 2, ..., n. Suppose furthermore that, for any real number sigma, the integer A(sigma) is defined to be the number of agreements of sign of consecutive elements of the sequence

p_0(sigma), p_1(sigma), ..., p_n(sigma),    (55)

where the sign of p_k(sigma) is taken to be opposite to the sign of p_{k-1}(sigma) if p_k(sigma) is zero. In other words,

A(sigma) = sum_{k=1}^{n} agree(p_{k-1}(sigma), p_k(sigma)),    (56)

where, for any pair of real numbers a, b, the integer agree(a,b) is defined via

agree(a,b) = 1 if a*b > 0;  1 if a = 0 and b != 0;  0 if b = 0;  0 if a*b < 0.    (57)

Then, the number of eigenvalues of C that are strictly larger than sigma is precisely A(sigma).
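The count A(sigma) can be computed directly from the recurrence (54). The sketch below does so and checks it on a matrix with known eigenvalues (the matrix of Theorem 4 with n = 5, whose eigenvalues are 2 + 2cos(pi*k/6) by (24)); rescaling of the p_k to avoid overflow for large n is omitted.

```python
# Direct implementation of the sign-agreement count A(sigma) of
# Theorem 6, via the recurrence (54). This bare version is enough for
# small matrices; for large n one would rescale the p_k.
def sturm_count(a, b, sigma):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal a and off-diagonals b that are strictly larger than sigma."""
    count = 0
    p_prev, p = 1.0, a[0] - sigma   # p_0, p_1
    sign_prev = 1                   # sign of p_0 = 1
    for k in range(len(a)):
        if k > 0:                   # next polynomial via (54)
            p_prev, p = p, (a[k] - sigma) * p - b[k - 1] ** 2 * p_prev
        # a zero p_k takes the sign opposite to its predecessor
        s = (1 if p > 0 else -1) if p != 0.0 else -sign_prev
        count += (s == sign_prev)
        sign_prev = s
    return count

# Matrix of Theorem 4 with n = 5: eigenvalues 2 + 2*cos(pi*k/6),
# k = 1,...,5, i.e. approximately 3.73, 3, 2, 1, 0.27.
a, b = [2.0] * 5, [1.0] * 4
assert sturm_count(a, b, 2.5) == 2   # two eigenvalues above 2.5
assert sturm_count(a, b, 0.0) == 5   # all eigenvalues are positive
assert sturm_count(a, b, 4.0) == 0   # none above 4
```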

Corollary 1 (Sturm bisection). The eigenvalue sigma_k(C) of (50) can be found by means of bisection, each iteration of which costs O(n) operations.

Proof. We initialize the bisection by choosing x_0 < sigma_k(C) < y_0. Then we set j = 0 and iterate as follows.

Set z_j = (x_j + y_j)/2. If y_j - x_j is small enough, stop and return z_j.
Compute A_j = A(z_j) using (54) and (55).
If A_j >= n - k + 1, set x_{j+1} = z_j and y_{j+1} = y_j. If A_j < n - k + 1, set x_{j+1} = x_j and y_{j+1} = z_j.
Increase j by one and go to the first step.

In the end, |sigma_k(C) - z_j| is at most y_j - x_j. The cost of the algorithm is due to (54) and the definition of A(sigma).

3.4.3 Evaluation of Bessel Functions

The following numerical algorithm for the evaluation of Bessel functions (see Section 6.1) at a given point is based on Theorem 5 (see e.g. [1]). Suppose that x > 0 is a real number, and that n > 0 is an integer. The following numerical algorithm evaluates J_0(x), J_{+-1}(x), ..., J_{+-n}(x).

Algorithm: evaluation of J_0(x), J_{+-1}(x), ..., J_{+-n}(x).

Select an integer N > n (such that N is also greater than x).
Set J~_N = 1 and J~_{N+1} = 0.
Evaluate J~_{N-1}, J~_{N-2}, ..., J~_1 iteratively via the recurrence relation (30), in the direction of decreasing indices. In other words,

J~_{k-1} = (2k/x)*J~_k - J~_{k+1},    (58)

for every k = N, ..., 2.
Evaluate J~_{-1} from J~_1 via (29).
Evaluate J~_0 from J~_{-1}, J~_1 via (30).
Evaluate d via the formula

d^2 = J~_0^2 + 2*sum_{k=1}^{N} J~_k^2.    (59)

Return J~_0/d, J~_1/d, ..., J~_n/d.
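A sketch of this algorithm follows: downward recurrence (58) and the normalization (59). The choice of the starting index N here is ad hoc, and, for simplicity, J~_0 is obtained by running (58) all the way down rather than via (29), (30); the two routes agree in exact arithmetic. The result is compared against the truncated defining series (28).

```python
import math

def bessel_j(x, n):
    """Approximate J_0(x), ..., J_n(x) via downward recurrence plus
    normalization; the choice of the starting index N is ad hoc."""
    N = max(n, int(x)) + 20
    jt = [0.0] * (N + 2)
    jt[N] = 1.0                      # jt[N + 1] = 0
    for k in range(N, 0, -1):        # recurrence (58), decreasing indices
        jt[k - 1] = (2.0 * k / x) * jt[k] - jt[k + 1]
    d = math.sqrt(jt[0] ** 2 + 2.0 * sum(t * t for t in jt[1:N + 1]))
    return [t / d for t in jt[:n + 1]]

def j_series(k, x, terms=25):
    """Truncated defining series (28); converges fast for moderate x."""
    return sum((-1.0) ** m * (x / 2.0) ** (2 * m + k)
               / (math.factorial(m) * math.factorial(m + k))
               for m in range(terms))

vals = bessel_j(1.0, 2)
assert all(abs(vals[k] - j_series(k, 1.0)) < 1.0e-12 for k in range(3))
```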

Discussion of such properties of this algorithm as the accuracy to which J~_0/d, J~_1/d, ..., J~_n/d approximate J_0(x), J_1(x), ..., J_n(x), or the choice of N, is beyond the scope of this paper (see, however, Section 7). Nevertheless, we make the following remark about this algorithm.

Remark 9. Suppose that x > 0 is a real number, that 0 < n < N are integers, and that the real numbers J~_0, ..., J~_N and d are those of the algorithm above (see (58), (59)). Suppose also that A is the symmetric tridiagonal 2N+1 by 2N+1 matrix, whose non-zero entries are defined via the formulae

A_{j,j+1} = A_{j+1,j} = 1,    (60)

for every j = 1,...,2N, and

A_{j,j} = A_j = 2 + 2j/x,    (61)

for every j = 1,...,2N+1. Then, in exact arithmetic, the vector X = (X_1,...,X_{2N+1}) in R^{2N+1}, defined via

X = (1/d) * ( J~_N, ..., J~_1, J~_0, -J~_1, ..., (-1)^N * J~_N ),    (62)

is the unit-length eigenvector of A, corresponding to the eigenvalue lambda, defined via

lambda = 2 + 2(N+1)/x.    (63)

In particular, X_1 > 0.

4 Analytical Apparatus

The purpose of this section is to provide the analytical apparatus to be used in the rest of the paper.

4.1 Properties of Certain Tridiagonal Matrices

In this subsection, we study some general properties of the matrices of the following type.

Definition 1. Suppose that n > 1 is a positive integer, and 0 < f_1 < f_2 < ... is an increasing sequence of positive real numbers. We define the symmetric tridiagonal n by n matrix A = A(f) via the formula

    [ A_1  1                         ]
    [ 1    A_2  1                    ]
A = [      1    A_3  1               ]    (64)
    [            .    .    .         ]
    [              1   A_{n-1}   1   ]
    [                   1       A_n  ]

where, for every integer k > 0, the real number A_k is defined via the formula

A_k = 2 + f_k.    (65)

In the following theorem, we describe some properties of the eigenvectors of the matrix A of Definition 1.

Theorem 7. Suppose that n > 1 is an integer, 0 < f_1 < f_2 < ... is an increasing sequence of positive real numbers, and the symmetric tridiagonal n by n matrix A is that of Definition 1. Suppose also that the real number lambda is an eigenvalue of A, and x = (x_1,...,x_n) in R^n is an eigenvector corresponding to lambda. Then,

x_2 = (lambda - A_1)*x_1.    (66)

Also,

x_3 = (lambda - A_2)*x_2 - x_1,
...
x_{k+1} = (lambda - A_k)*x_k - x_{k-1},
...
x_n = (lambda - A_{n-1})*x_{n-1} - x_{n-2}.    (67)

Finally,

x_{n-1} = (lambda - A_n)*x_n.    (68)

In particular, both x_1 and x_n differ from zero, and all eigenvalues of A are simple.

Proof. The identities (66), (67), (68) follow immediately from Definition 1 and the fact that

A*x = lambda*x.    (69)

To prove that the eigenvalues are simple, we observe that the coordinates x_2,...,x_n of any eigenvector corresponding to lambda are completely determined by x_1 via (66), (67) (or, alternatively, via (68), (67)). Obviously, neither x_1 nor x_n can be equal to zero, for otherwise x would be the zero vector.

The following theorem follows directly from Theorem 7 and Theorem 6 in Section 3.4.2.

Theorem 8. Suppose that n > 1 is an integer, 0 < f_1 < f_2 < ... is an increasing sequence of positive real numbers, and the symmetric tridiagonal n by n matrix A is that of Definition 1. Suppose also that 1 <= k <= n is an integer, that lambda_k is the kth smallest eigenvalue of A, and that x = (x_1,...,x_n) in R^n is an eigenvector corresponding to lambda_k. Then,

sum_{j=1}^{n-1} agree(x_j, x_{j+1}) = k - 1,    (70)

where agree is defined via (57) in Theorem 6.
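The recurrences (66), (67) are simply the rows of (69) written out. As a sanity check, the sketch below runs them on the explicitly known eigenpair (24), (25) of the constant-diagonal matrix of Theorem 4 (which is not of the type in Definition 1, but satisfies the same eigenvalue equation) and recovers the remaining coordinates from x_1.

```python
import math

# Run (66), (67) on the eigenpair (24), (25) of the matrix of Theorem 4
# (off-diagonal 1, diagonal A_k = 2) and compare with the exact
# coordinates sin(j*pi*kk/(n+1)), up to the overall scaling fixed by x_1.
n, kk = 12, 4
lam = 2.0 * (math.cos(math.pi * kk / (n + 1)) + 1.0)
A = [2.0] * n

x = [math.sin(math.pi * kk / (n + 1))]          # x_1 fixes the scaling
x.append((lam - A[0]) * x[0])                   # (66)
for j in range(1, n - 1):
    x.append((lam - A[j]) * x[j] - x[j - 1])    # (67)

exact = [math.sin(math.pi * kk * j / (n + 1)) for j in range(1, n + 1)]
assert max(abs(a - b) for a, b in zip(x, exact)) < 1.0e-10
```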

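The claims of Theorems 7 and 8 are easy to watch numerically. The following sketch (not part of the paper) builds the matrix A of Definition 1 for a hypothetical choice f_k = k/2, regenerates an eigenvector from its first coordinate via the recurrence (66), (67), and counts the sign agreements (70); numpy's `eigh` returns eigenvalues in increasing order, which matches the indexing of Theorem 8.

```python
import numpy as np

# Hypothetical data: the matrix A of Definition 1 with f_k = k/2.
n = 10
f = np.arange(1, n + 1) / 2.0
Adiag = 2.0 + f                                   # A_k = 2 + f_k   (65)
A = np.diag(Adiag) + np.eye(n, k=1) + np.eye(n, k=-1)

lam, V = np.linalg.eigh(A)                        # eigenvalues in increasing order

# Theorem 7: all eigenvalues are simple.
simple = bool(np.all(np.diff(lam) > 1e-12))

# Theorem 7: the coordinates are determined by x_1 via (66), (67).  We check
# this for the eigenvector of the LARGEST eigenvalue, where the forward
# recurrence is numerically benign (no decay region is entered).
x = V[:, -1] * np.sign(V[0, -1])                  # normalize so that x_1 > 0
rec = [x[0], (lam[-1] - Adiag[0]) * x[0]]         # (66)
for j in range(1, n - 1):
    rec.append((lam[-1] - Adiag[j]) * rec[-1] - rec[-2])       # (67)
err = np.max(np.abs(np.array(rec) - x)) / np.max(np.abs(x))

# Theorem 8: the eigenvector of the kth smallest eigenvalue has exactly
# k - 1 sign agreements between consecutive coordinates (70).
def agreements(v):
    return sum(1 for j in range(n - 1) if v[j] * v[j + 1] > 0)

agree_ok = all(agreements(V[:, k]) == k for k in range(n))     # zero-based k
```

The forward recurrence is used here only for the top eigenvector; for interior eigenvalues it is unstable past the turning point, which is precisely the phenomenon the rest of this section analyzes.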
Proof. We define the n by n matrix C via the formula

    C = lambda_k I - A.                                         (71)

Suppose that, for any real t, the sequence p_0(t), ..., p_n(t) is the Sturm sequence, defined via (55) in Theorem 6. Suppose also that x = (x_1, ..., x_n) is the eigenvector of A corresponding to lambda_k, and that

    x_1 = 1.                                                    (72)

We combine (72) with (66), (67) of Theorem 7 and (52), (54) of Theorem 6 to conclude that

    p_0(0) = x_1, p_1(0) = x_2, ..., p_{n-1}(0) = x_n.          (73)

Moreover, due to the combination of (68) and (54),

    p_n(0) = 0.                                                 (74)

We observe that the matrix C, defined via (71), has k - 1 positive eigenvalues, and combine this observation with (56), (57) of Theorem 6 and (73), (74) to obtain (70).

Corollary 2. Suppose that n > 1 is an integer, 0 < f_1 < f_2 < ... is an increasing sequence of positive real numbers, and the symmetric tridiagonal n by n matrix A is that of Definition 1. Suppose also that lambda_1 and lambda_n are, respectively, the minimal and maximal eigenvalues of A. Then,

    lambda_1 < A_1,                                             (75)

and

    lambda_n > A_n.                                             (76)

Proof. Suppose that x = (x_1, ..., x_n) is an eigenvector corresponding to lambda_1. Due to the combination of Theorem 7 and Theorem 8,

    x_1 * x_2 < 0.                                              (77)

We combine (77) with (66) to obtain (75). On the other hand, if x = (x_1, ..., x_n) is an eigenvector corresponding to lambda_n, then, due to the combination of Theorem 7 and Theorem 8,

    x_{n-1} * x_n > 0.                                          (78)

We combine (78) with (68) to obtain (76).

The following theorem is a direct consequence of Theorems 3, 4 in Section 3.1.

Theorem 9. Suppose that n > 1 is an integer, 0 < f_1 < f_2 < ... is an increasing sequence of positive real numbers, and the symmetric tridiagonal n by n matrix A is that of Definition 1. Suppose also that

    lambda_1 < lambda_2 < ... < lambda_n                        (79)

are the eigenvalues of A. Then,

    lambda_k > 2 * (1 - cos(pi * k / (n + 1))),                 (80)

for every integer k = 1, ..., n. In particular, A is positive definite. Also,

    lambda_k > A_k - 2,                                         (81)

for every k = 1, ..., n.

Proof. The inequalities (80), (81) follow from the combination of Theorems 3, 4 in Section 3.1 and Definition 1.

In the following theorem, we provide an upper bound on each eigenvalue of A.

Theorem 10. Suppose that n > 1 is an integer, 0 < f_1 < f_2 < ... is an increasing sequence of positive real numbers, and the symmetric tridiagonal n by n matrix A is that of Definition 1. Suppose also that 1 <= k <= n is an integer, and that lambda_k is the kth smallest eigenvalue of A. Then,

    lambda_k <= 2 + A_k.                                        (82)

Proof. Suppose, by contradiction, that

    lambda_k > 2 + A_k.                                         (83)

Suppose that x = (x_1, ..., x_n) in R^n is an eigenvector of A corresponding to lambda_k. Suppose also that

    x_1 > 0.                                                    (84)

It follows from (83) that

    lambda_k - A_1 > lambda_k - A_2 > ... > lambda_k - A_k > 2. (85)

We combine (66), (67) in Theorem 7 with (84), (85) to conclude that

    x_1 > 0, x_2 > 0, ..., x_{k+1} > 0.                         (86)

We combine (86) with (57) in Theorem 6 in Section 3.4.2 to conclude that

    sum_{j=1}^{n-1} agree(x_j, x_{j+1}) >= k,                   (87)

in contradiction to Theorem 8.

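The two-sided localization of Theorems 9 and 10 can be checked directly. The sketch below (an illustration, not from the paper) uses a hypothetical sequence f_k = k/3 and verifies the lower bound (80), the bounds (81), (82), and positive definiteness with numpy.

```python
import numpy as np

# Hypothetical data: the matrix A of Definition 1 with f_k = k/3.
n = 12
f = np.arange(1, n + 1) / 3.0
Adiag = 2.0 + f                                   # A_k = 2 + f_k
A = np.diag(Adiag) + np.eye(n, k=1) + np.eye(n, k=-1)
lam = np.linalg.eigvalsh(A)                       # increasing order

k = np.arange(1, n + 1)
ok80 = bool(np.all(lam > 2.0 - 2.0 * np.cos(np.pi * k / (n + 1))))  # (80)
ok81 = bool(np.all(lam > Adiag - 2.0))                              # (81)
ok82 = bool(np.all(lam <= Adiag + 2.0))                             # (82)
pos_def = bool(np.all(lam > 0))                   # consequence of (80)
```

Each eigenvalue is trapped in the window (A_k - 2, A_k + 2], which is what lets the later theorems classify every index j as growth, oscillatory, or decay relative to a given eigenvalue.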
4.2 Local Properties of Eigenvectors

Throughout this subsection, n > 0 is an integer, and 0 < f_1 < f_2 < ... is an increasing sequence of positive real numbers. We will be considering the corresponding symmetric tridiagonal n by n matrix A = A(f) (see Definition 1 in Section 4.1). In the following theorem, we assert that, under certain conditions, the first element of the eigenvectors of A must be small.

Theorem 11. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1. Suppose also that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector whose first coordinate is positive, i.e. x_1 > 0. Suppose, in addition, that 1 <= k <= n is an integer, and that

    lambda >= A_k + 2.                                          (88)

Then,

    0 < x_1 < x_2 < ... < x_k < x_{k+1}.                        (89)

Also,

    x_j / x_{j-1} > (lambda - A_j)/2 + sqrt( ((lambda - A_j)/2)^2 - 1 ),   (90)

for every j = 2, ..., k. In addition,

    1 < x_{k+1}/x_k < ... < x_3/x_2 < x_2/x_1.                  (91)

Proof. It follows from (88) that

    lambda - A_1 > lambda - A_2 > ... > lambda - A_k >= 2.      (92)

We combine (66), (67) in Theorem 7 with (92) to obtain (89) by induction. Next, suppose that the real numbers r_1, ..., r_k are defined via

    r_j = x_{j+1} / x_j,                                        (93)

for every j = 1, ..., k. Also, we define the real numbers sigma_1, ..., sigma_k via the formula

    sigma_j = (lambda - A_j)/2 + sqrt( ((lambda - A_j)/2)^2 - 1 ),   (94)

for every j = 1, ..., k. In other words, sigma_j is the largest root of the quadratic equation

    x^2 - (lambda - A_j) * x + 1 = 0.                           (95)

We observe that

    sigma_1 > sigma_2 > ... > sigma_k >= 1,                     (96)

due to (92) and (94). Also,

    r_1 > sigma_1 > sigma_2 > 1,                                (97)

due to the combination of (94) and (66). Suppose now, by induction, that

    r_{j-1} > sigma_j > 1,                                      (98)

for some 2 <= j <= k - 1. We observe that the roots of the quadratic equation (95) are 1/sigma_j < 1 < sigma_j, and combine this observation with (98) to obtain

    r_{j-1}^2 - (lambda - A_j) * r_{j-1} + 1 > 0.               (99)

We combine (99) with (93) and (67) to obtain

    r_j = x_{j+1}/x_j = ((lambda - A_j) * x_j - x_{j-1}) / x_j
        = lambda - A_j - 1/r_{j-1} < r_{j-1}.                   (100)

Also, we combine (94), (98), (100) to obtain

    r_j = lambda - A_j - 1/r_{j-1} > lambda - A_j - 1/sigma_j
        = ((lambda - A_j) * sigma_j - 1) / sigma_j = sigma_j > sigma_{j+1}.   (101)

In other words, (98) implies (101), and we combine this observation with (97) to obtain

    r_1 > sigma_2, r_2 > sigma_3, ..., r_{k-1} > sigma_k.       (102)

Also, due to (100),

    r_1 > r_2 > ... > r_k.                                      (103)

We combine (93), (94), (102), (103) to obtain (90), (91).

Corollary 3. Under the assumptions of Theorem 11,

    x_k / x_1 > prod_{j=2}^{k} [ (lambda - A_j)/2 + sqrt( ((lambda - A_j)/2)^2 - 1 ) ].   (104)

Remark 10. In [12], [13], the derivation of an upper bound on the first coordinate of an eigenvector of a certain matrix is based on analysis similar to that of Theorem 11.

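The growth mechanism of Theorem 11 and Corollary 3 uses nothing but the three-term recurrence (66), (67), so it can be observed on a synthetic sequence. A sketch with hypothetical data (f_j = j, lambda chosen so that lambda - A_j >= 2 for j <= k = 6; the resulting coordinates happen to be small integers):

```python
import math

# Hypothetical data: A_j = 2 + j and lambda = A_6 + 2 = 10, so that
# lambda - A_j >= 2 for j = 1, ..., 6 (condition (88) with k = 6).
k = 6
A = [2 + j for j in range(1, 10)]           # A[0] is A_1, etc.
lam = A[k - 1] + 2                          # lambda = 10

x = [1.0, lam - A[0]]                       # x_1 = 1, x_2 = (lambda - A_1) x_1  (66)
for j in range(2, k + 1):                   # x_3, ..., x_{k+1} via (67)
    x.append((lam - A[j - 1]) * x[-1] - x[-2])

# (89): strict growth; (91): consecutive ratios decrease.
increasing = all(a < b for a, b in zip(x, x[1:]))
ratios = [b / a for a, b in zip(x, x[1:])]
ratios_decreasing = all(ratios[i] > ratios[i + 1] for i in range(len(ratios) - 1))

# (90): x_j / x_{j-1} exceeds sigma_j, the largest root of (95).
def sigma(j):
    c = (lam - A[j - 1]) / 2.0
    return c + math.sqrt(c * c - 1.0)

ratio_bound = all(x[j - 1] / x[j - 2] > sigma(j) for j in range(2, k + 1))

# Corollary 3: x_1 is exponentially small compared to x_k.
prod = 1.0
for j in range(2, k + 1):
    prod *= sigma(j)
corollary3 = x[k - 1] / x[0] > prod
```

The product bound is the quantitative heart of the paper's "small first element" analysis: every growth-region step multiplies the gap between x_1 and the bulk of the eigenvector by at least sigma_j.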
Theorem 12. Suppose that the integers 1 <= k < n, the real numbers A_1, ..., A_n and lambda, and the vector x = (x_1, ..., x_n) in R^n are those of Theorem 11. Suppose also that 2 <= j <= k is an integer, and that eps_{j-1}, eps_j are real numbers. Suppose furthermore that the real number x~_{j+1} is defined via the formula

    x~_{j+1} = (lambda - A_j) * x_j * (1 + eps_j) - x_{j-1} * (1 + eps_{j-1}).   (105)

Then,

    |x~_{j+1} - x_{j+1}| / x_{j+1}
        <= |eps_j| * (lambda - A_j)/(lambda - A_j - 1) + |eps_{j-1}| / (lambda - A_j - 1)^2.   (106)

In particular,

    |x~_{j+1} - x_{j+1}| / x_{j+1}
        <= |eps_j| * (2 + A_k - A_j)/(1 + A_k - A_j) + |eps_{j-1}| / (1 + A_k - A_j)^2.   (107)

Proof. We combine (105) with (67) of Theorem 7 to obtain

    x~_{j+1} - x_{j+1} = (lambda - A_j) * x_j * eps_j - x_{j-1} * eps_{j-1}.   (108)

Also, due to the combination of (67) and (89) of Theorem 11,

    x_j / x_{j+1} < 1 / (lambda - A_j - 1).                     (109)

Then, due to (91) of Theorem 11,

    x_{j-1} / x_{j+1} < (x_j / x_{j+1})^2.                      (110)

We combine (108), (109) and (110) to obtain (106). We combine (106) with (88) to obtain (107).

Remark 11. While an analysis along the lines of the proof of Theorem 11 would lead to a stronger inequality than (106), the latter is sufficient for the purposes of this paper.

In the following theorem, we study the behavior of the several last elements of an eigenvector of A.

Theorem 13. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1. Suppose also that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector whose last coordinate is positive, i.e. x_n > 0. Suppose, in addition, that 1 <= k <= n is an integer, and that

    lambda <= A_k - 2.                                          (111)

Then,

    0 < x_n < -x_{n-1} < x_{n-2} < ... < (-1)^{n-k} * x_k < (-1)^{n-k+1} * x_{k-1}.   (112)

Also,

    -x_j / x_{j+1} > (A_j - lambda)/2 + sqrt( ((A_j - lambda)/2)^2 - 1 ),   (113)

for every j = k, ..., n-1. In addition,

    1 > -x_{k+1}/x_k > ... > -x_{n-1}/x_{n-2} > -x_n/x_{n-1}.   (114)

Proof. We define the real numbers y_1, ..., y_n via the formula

    y_j = (-1)^{j+1} * x_{n+1-j},                               (115)

for every j = 1, ..., n. Also, we define the real numbers B_1, ..., B_n via the formula

    B_j = -A_{n+1-j},                                           (116)

for every j = 1, ..., n. In addition, we define the real number mu via the formula

    mu = -lambda.                                               (117)

We combine (115), (116), (117) with (67), (68) of Theorem 7 to obtain

    y_2 = (mu - B_1) * y_1 > 0,                                 (118)

and

    y_{j+1} = (mu - B_j) * y_j - y_{j-1},                       (119)

for every j = 2, ..., n-1. We combine (116), (117) with (111) to obtain

    mu - B_{n+1-k} >= 2.                                        (120)

Following the proof of Theorem 11 (with x_j, A_j, lambda replaced, respectively, with y_j, B_j, mu), we use (118), (119), (120) to obtain

    0 < y_1 < y_2 < ... < y_{n+1-k} < y_{n+2-k},                (121)

as well as

    y_j / y_{j-1} > (mu - B_j)/2 + sqrt( ((mu - B_j)/2)^2 - 1 ),   (122)

for every j = 2, ..., n+1-k, and

    1 < y_{n+1-k}/y_{n-k} < ... < y_3/y_2 < y_2/y_1.            (123)

Now (112), (113), (114) follow from the combination of (115), (116), (117), (121), (122) and (123).

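The decay mechanism of Theorem 13 is the mirror image of Theorem 11, driven by the same recurrence run from the other end. A sketch with hypothetical data (f_j = j, n = 7, lambda = A_3 - 2, coordinates built backwards from x_n via (68) and (67)):

```python
import math

# Hypothetical data: A_j = 2 + j, n = 7, lambda = A_3 - 2 = 3, so that
# lambda - A_j <= -2 for j >= 3 (condition (111) with k = 3).
n, k = 7, 3
A = [2 + j for j in range(1, n + 1)]
lam = A[k - 1] - 2.0

# Build coordinates backwards, as in the y-transformation of the proof:
# x_{n-1} = (lambda - A_n) x_n   (68),
# x_{j-1} = (lambda - A_j) x_j - x_{j+1}   (rearranged (67)).
x = [0.0] * n
x[n - 1] = 1.0                               # x_n = 1 > 0
x[n - 2] = (lam - A[n - 1]) * x[n - 1]
for j in range(n - 1, k, -1):                # fills x_{k}, ..., x_{n-2}
    x[j - 2] = (lam - A[j - 1]) * x[j - 1] - x[j]

# (112): alternating signs, absolute values strictly decaying towards x_n.
alternating = all(x[j] * x[j + 1] < 0 for j in range(k - 1, n - 1))
decaying = all(abs(x[j]) > abs(x[j + 1]) for j in range(k - 1, n - 1))

# (113): -x_j/x_{j+1} exceeds the largest root of t^2 - (A_j - lambda) t + 1 = 0.
def sigma(j):
    c = (A[j - 1] - lam) / 2.0
    return c + math.sqrt(c * c - 1.0)

bound113 = all(-x[j - 1] / x[j] > sigma(j) for j in range(k, n))
```

Seen from index n backwards, the decay region is a growth region: this is exactly the substitution (115)-(117) in the proof.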
The following definition will be used to emphasize that the elements of an eigenvector of A exhibit different behaviors in the beginning, in the middle and in the end.

Definition 2. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1. Suppose also that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector whose first coordinate is positive, i.e. x_1 > 0. The eigenvalue lambda induces the division of the set {1, ..., n} into three disjoint parts I_G(lambda), I_O(lambda), I_D(lambda), defined via the formulae

    I_G(lambda) = {1 <= j <= n : lambda - A_j >= 2},            (124)
    I_O(lambda) = {1 <= j <= n : 2 > lambda - A_j > -2},        (125)
    I_D(lambda) = {1 <= j <= n : lambda - A_j <= -2}.           (126)

In other words, if

    lambda - A_k >= 2 > lambda - A_{k+1} > ... > lambda - A_m > -2 >= lambda - A_{m+1},   (127)

for some integers k, m, then

    I_G(lambda) = {1, ..., k}, I_O(lambda) = {k+1, ..., m}, I_D(lambda) = {m+1, ..., n}.   (128)

The sets I_G(lambda), I_O(lambda), I_D(lambda) are referred to as the region of growth, the oscillatory region and the region of decay, respectively.

Remark 12. Obviously, depending on lambda, some of I_G(lambda), I_O(lambda), I_D(lambda) may be empty. For example, for the matrix A of Theorem 4 in Section 3.1, both I_G(lambda) and I_D(lambda) are empty for any eigenvalue lambda.

Remark 13. The terms "region of growth" and "region of decay" are justified by Theorems 11, 13, respectively. Loosely speaking, in the region of growth, the elements of x have the same sign, and their absolute values grow as the indices increase. On the other hand, in the region of decay, the elements of x have alternating signs, and their absolute values decay as the indices increase.

In the rest of this subsection, we investigate the behavior of the elements of an eigenvector corresponding to lambda in the oscillatory region I_O(lambda) (see Definition 2). In the following theorem, we partially justify the term "oscillatory region" of Definition 2.

Theorem 14. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1, that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector. Suppose also that the set I_O(lambda), defined via (125) in Definition 2, is

    I_O(lambda) = {k+1, ..., m},                                (129)

for some integers 1 <= k < m < n. Suppose furthermore that the real numbers theta_k, ..., theta_{m-1} are defined via the formula

    theta_j = arccos( (lambda - A_{j+1}) / 2 ),                 (130)

for every j = k, ..., m-1, that the two-dimensional vectors v_k, ..., v_m are defined via the formula

    v_j = ( x_j, x_{j+1} )^T,                                   (131)

for every integer j = k, ..., m, and that the 2 by 2 matrices Q_k, ..., Q_{m-1} are defined via the formula

    Q_j = |  0   1             |
          | -1   2 cos(theta_j) |,                              (132)

for every integer j = k, ..., m-1. Then,

    v_{j+1} = Q_j v_j,                                          (133)

for every integer j = k, ..., m-1.

Proof. We observe that

    0 < theta_k < theta_{k+1} < ... < theta_{m-1} < pi,         (134)

due to the combination of (125) and (130). We also observe that the recurrence relation (133) follows immediately from the combination of Theorem 7 and (130), (131), (132).

In the following theorem, we describe some elementary properties of a certain 2 by 2 matrix.

Theorem 15. Suppose that 0 < theta < pi is a real number. Then,

    i/(2 sin(theta)) * |  e^{-i theta}  -1 |   |  0   1            |   | 1            1            |     | e^{i theta}   0            |
                       | -e^{i theta}    1 | * | -1   2 cos(theta) | * | e^{i theta}  e^{-i theta} |  =  | 0             e^{-i theta} |.   (135)

Suppose also that x, y are real numbers. Then,

    ( x, y )^T = alpha * ( 1, e^{i theta} )^T + beta * ( 1, e^{-i theta} )^T,   (136)

where the complex numbers alpha, beta are defined via

    ( alpha, beta )^T = i/(2 sin(theta)) * |  e^{-i theta}  -1 | ( x, y )^T;   (137)
                                           | -e^{i theta}    1 |

moreover,

    beta = conj(alpha),                                         (138)

where conj(alpha) denotes the complex conjugate of alpha. In particular,

    x = 2 Re(alpha),                                            (139)
    y = 2 Re(e^{i theta} alpha),                                (140)

where Re(z) denotes the real part of any complex number z.

Proof. The proof is straightforward, elementary, and is based on the observation that

    | 1            1            |^{-1}                 |  e^{-i theta}  -1 |
    | e^{i theta}  e^{-i theta} |       = i/(2 sin(theta)) * | -e^{i theta}    1 |.   (141)

In the following theorem, we evaluate the condition number of the 2 by 2 matrix of Theorem 15.

Theorem 16. Suppose that 0 < theta < pi is a real number, and that kappa(theta) is the condition number of the matrix

    U(theta) = | 1            1            |
               | e^{i theta}  e^{-i theta} |.                   (142)

In other words, kappa(theta) is defined via the formula

    kappa(theta) = sigma_1(theta) / sigma_2(theta),             (143)

where sigma_1(theta), sigma_2(theta) are, respectively, the largest and smallest singular values of U(theta). Then,

    kappa(theta) = sqrt( (1 + |cos(theta)|) / (1 - |cos(theta)|) )
                 = max{ tan(theta/2), cot(theta/2) }.           (144)

Proof. The proof is elementary, straightforward, and is based on the fact that the eigenvalues of the matrix U(theta)^* U(theta) are the roots of the quadratic equation

    x^2 - 4x + 4 sin^2(theta) = 0,                              (145)

in the unknown x.

In the following theorem, we introduce a complexification of the coordinates of an eigenvector of A in the oscillatory region (see Definitions 1, 2).

Theorem 17. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1, that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector. Suppose also that the set I_O(lambda), defined via (125) in Definition 2, is

    I_O(lambda) = {k+1, ..., m},                                (146)

for some integers 1 <= k < m < n. Suppose furthermore that the real numbers theta_k, ..., theta_{m-1} are defined via (130) of Theorem 14, and that the complex numbers alpha_k, ..., alpha_{m-1} and beta_k, ..., beta_{m-1} are defined via the equation

    | 1              1              | ( alpha_j, beta_j )^T = ( x_j, x_{j+1} )^T,   (147)
    | e^{i theta_j}  e^{-i theta_j} |

in the unknowns alpha_j, beta_j, for j = k, ..., m-1. Then,

    x_j = 2 Re(alpha_j),                                        (148)

and

    x_{j+1} = 2 Re( alpha_j e^{i theta_j} ),                    (149)

for every j = k, ..., m-1. Moreover,

    alpha_{j+1} = alpha_j e^{i theta_j}
        - i * ( cos(theta_j) - cos(theta_{j+1}) ) / ( sin(theta_{j+1}) * sin((theta_j + theta_{j+1})/2) )
            * Im( alpha_j e^{i theta_j} e^{i (theta_j + theta_{j+1})/2} ),   (150)

for every j = k, ..., m-2.

Proof. The identities (148), (149) follow from the combination of Theorems 14, 15. To establish (150), we proceed as follows. Suppose that k <= j <= m-2 is an integer. We combine (131), (132), (133) of Theorem 14 with (147) to obtain

    ( x_{j+1}, x_{j+2} )^T = |  0   1              | * | 1              1              | ( alpha_j, beta_j )^T.   (151)
                             | -1   2 cos(theta_j) |   | e^{i theta_j}  e^{-i theta_j} |

We substitute (135), (141) of Theorem 15 into (151) to obtain

    ( x_{j+1}, x_{j+2} )^T = | 1              1              | * | e^{i theta_j}  0               | ( alpha_j, beta_j )^T
                             | e^{i theta_j}  e^{-i theta_j} |   | 0              e^{-i theta_j}  |

                           = | 1              1              | ( alpha_j e^{i theta_j}, beta_j e^{-i theta_j} )^T.   (152)
                             | e^{i theta_j}  e^{-i theta_j} |

Next, we combine (138), (147) and (152) to obtain

    ( alpha_{j+1}, beta_{j+1} )^T
        = i/(2 sin(theta_{j+1})) * |  e^{-i theta_{j+1}}  -1 | * | 1              1              | ( alpha_j e^{i theta_j}, conj(alpha_j) e^{-i theta_j} )^T.   (153)
                                   | -e^{i theta_{j+1}}    1 |   | e^{i theta_j}  e^{-i theta_j} |

It follows from (153) that

    alpha_{j+1} = alpha_j e^{i theta_j}
        + i/(2 sin(theta_{j+1})) * [ (e^{i theta_{j+1}} - e^{i theta_j}) alpha_j e^{i theta_j}
                                     + (e^{-i theta_{j+1}} - e^{-i theta_j}) conj(alpha_j) e^{-i theta_j} ]
        = alpha_j e^{i theta_j}
          + i/sin(theta_{j+1}) * Re( (e^{i theta_{j+1}} - e^{i theta_j}) alpha_j e^{i theta_j} ),   (154)

since the two terms inside the brackets are complex conjugates of each other. We observe that, for any real numbers t, h,

    e^{it} - e^{ih} = 2 i sin( (t - h)/2 ) * e^{i (t + h)/2},   (155)

and use (155) with t = theta_{j+1}, h = theta_j to obtain

    Re( (e^{i theta_{j+1}} - e^{i theta_j}) alpha_j e^{i theta_j} )
        = -2 sin( (theta_{j+1} - theta_j)/2 ) * Im( alpha_j e^{i theta_j} e^{i (theta_j + theta_{j+1})/2} ).   (156)

However,

    2 sin( (theta_{j+1} - theta_j)/2 ) * sin( (theta_j + theta_{j+1})/2 ) = cos(theta_j) - cos(theta_{j+1}),   (157)

and hence

    2 sin( (theta_{j+1} - theta_j)/2 ) / sin(theta_{j+1})
        = ( cos(theta_j) - cos(theta_{j+1}) ) / ( sin(theta_{j+1}) * sin((theta_j + theta_{j+1})/2) ).   (158)

Finally, we substitute (156), (157), (158) into (154) to obtain (150).

Corollary 4. Under the assumptions of Theorem 17,

    | alpha_{j+1} - alpha_j e^{i theta_j} |
        <= |alpha_j| * ( cos(theta_j) - cos(theta_{j+1}) ) / ( sin(theta_{j+1}) * sin((theta_j + theta_{j+1})/2) ),   (159)

for every j = k, ..., m-2.

Corollary 5. Under the assumptions of Theorem 17,

    x_j^2 + x_{j+1}^2 = 4 R_j^2 * ( 1 + cos(theta_j) * cos(2 xi_j + theta_j) ),   (160)

for every j = k, ..., m-1, where the real numbers R_j > 0 and xi_j are defined via

    alpha_j = R_j e^{i xi_j},                                   (161)

for every j = k, ..., m-1.

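The complexification of Theorem 17 and the conditioning statement of Theorem 16 can both be checked on a synthetic oscillatory sequence. The sketch below (illustration only; the angles theta_j are hypothetical) solves (147) for alpha_j, beta_j with numpy, and confirms (138), (148), (149), the polar identity (160) of Corollary 5, and the formula (144) for kappa.

```python
import numpy as np

# Hypothetical oscillatory data: x_{j+2} = 2 cos(theta_j) x_{j+1} - x_j,
# which is the recurrence (133) written out for increasing angles.
theta = np.array([0.6, 0.7, 0.85, 1.0, 1.2])
x = [1.0, 0.9]
for t in theta:
    x.append(2.0 * np.cos(t) * x[-1] - x[-2])
x = np.array(x)

def U(t):
    # The matrix (142) of Theorem 16 / system (147) of Theorem 17.
    return np.array([[1.0, 1.0], [np.exp(1j * t), np.exp(-1j * t)]])

for j, t in enumerate(theta):
    alpha, beta = np.linalg.solve(U(t), x[j:j + 2])            # (147)
    assert abs(beta - np.conj(alpha)) < 1e-12                  # (138)
    assert abs(x[j] - 2 * alpha.real) < 1e-12                  # (148)
    assert abs(x[j + 1] - 2 * (alpha * np.exp(1j * t)).real) < 1e-12   # (149)
    R, xi = abs(alpha), np.angle(alpha)
    lhs = x[j] ** 2 + x[j + 1] ** 2
    rhs = 4 * R ** 2 * (1 + np.cos(t) * np.cos(2 * xi + t))    # (160)
    assert abs(lhs - rhs) < 1e-12

# Theorem 16: conditioning of the encoding, kappa = max(tan(t/2), cot(t/2)).
s = np.linalg.svd(U(0.8), compute_uv=False)
kappa = s[0] / s[-1]
ok_kappa = abs(kappa - max(np.tan(0.4), 1.0 / np.tan(0.4))) < 1e-12
```

Near the endpoints of the oscillatory region theta is close to 0 or pi, kappa blows up, and the pair (x_j, x_{j+1}) determines alpha_j poorly; this is why the angles are tracked in the middle of the region.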
Proof. We combine (147) of Theorem 17 and (138) of Theorem 15 to obtain

    x_j^2 + x_{j+1}^2 = 2 cos(theta_j) * ( alpha_j^2 e^{i theta_j} + beta_j^2 e^{-i theta_j} ) + 4 alpha_j beta_j
                      = 4 * ( |alpha_j|^2 + cos(theta_j) * Re( alpha_j^2 e^{i theta_j} ) ).   (162)

We substitute (161) into (162) to obtain (160).

In the following theorem, we discuss the zero-crossing of certain sequences.

Theorem 18. Suppose that k > 0 is an integer, and that

    0 < theta_1 < theta_2 < ... < theta_k < pi                  (163)

are real numbers. Suppose also that eps > 0 is a real number, that the sequence y_1, ..., y_{k+2} is defined via the formulae

    y_1 = y_2 = 1,                                              (164)

and

    ( y_{j+1}, y_{j+2} )^T = |  0   1              | ( y_j, y_{j+1} )^T,   (165)
                             | -1   2 cos(theta_j) |

for every j = 1, ..., k, and that the sequence z_1, ..., z_{k+2} is defined via the formulae

    z_1 = 1, z_2 = 1 + eps,                                     (166)

and

    ( z_{j+1}, z_{j+2} )^T = |  0   1              | ( z_j, z_{j+1} )^T,   (167)
                             | -1   2 cos(theta_j) |

for every j = 1, ..., k. Suppose furthermore that

    y_1, ..., y_{k+1} > 0.                                      (168)

Then,

    z_1, ..., z_{k+1} > 0.                                      (169)

In other words, the sequence z_1, ..., z_{k+2} cannot cross zero before the sequence y_1, ..., y_{k+2} does.

Proof. It follows from (168) that

    y_2/y_1, ..., y_{k+1}/y_k > 0.                              (170)

Also, we combine (164), (165), (166), (167) to obtain

    y_2/y_1 = 1 < 1 + eps = z_2/z_1,                            (171)

and the recurrence relations

    y_{j+2}/y_{j+1} = 2 cos(theta_j) - y_j/y_{j+1},             (172)

for every j = 1, ..., k, and

    z_{j+2}/z_{j+1} = 2 cos(theta_j) - z_j/z_{j+1},             (173)

for every j = 1, ..., k. Suppose now that 1 <= l < k, and that

    y_{l+1}/y_l < z_{l+1}/z_l.                                  (174)

We combine (172), (173), (174) to obtain

    y_{l+2}/y_{l+1} = 2 cos(theta_l) - y_l/y_{l+1} < 2 cos(theta_l) - z_l/z_{l+1} = z_{l+2}/z_{l+1}.   (175)

In other words, (174) implies (175), and we combine this observation with (171) to obtain, by induction on l,

    y_{l+1}/y_l < z_{l+1}/z_l,                                  (176)

for all l = 1, ..., k. We combine (170) with (176) to obtain (169).

The following theorem is similar to Theorem 18 above.

Theorem 19. Suppose that k > 0 is an integer, and that

    0 < theta_1 < theta_2 < ... < theta_k <= theta < pi         (177)

are real numbers. Suppose also that the sequence y_1, ..., y_{k+2} is defined via the formulae

    y_1 = y_2 = 1,                                              (178)

and

    ( y_{j+1}, y_{j+2} )^T = |  0   1              | ( y_j, y_{j+1} )^T,   (179)
                             | -1   2 cos(theta_j) |

for every j = 1, ..., k, and that the sequence t_1, ..., t_{k+2} is defined via the formulae

    t_1 = t_2 = 1,                                              (180)

and

    ( t_{j+1}, t_{j+2} )^T = |  0   1            | ( t_j, t_{j+1} )^T,   (181)
                             | -1   2 cos(theta) |

for every j = 1, ..., k. In other words, as opposed to (179), the matrix in (181) is the same for every j. Suppose furthermore that

    t_1, ..., t_{k+1} > 0.                                      (182)

Then,

    y_1, ..., y_{k+1} > 0.                                      (183)

In other words, the sequence y_1, ..., y_{k+2} cannot cross zero before the sequence t_1, ..., t_{k+2} does.

Proof. The proof is almost identical to the proof of Theorem 18, and is based on the observation that

    2 cos(theta_j) - 1/rho >= 2 cos(theta) - 1/r,               (184)

for any j = 1, ..., k, and any real numbers 0 < r <= rho.

In the following theorem, we address the question of how long it takes for a certain sequence to cross zero.

Theorem 20. Suppose that k > 0 and l > 0 are integers, and that

    0 < theta_k < theta_{k+1} < ... < theta_{k+l} < pi          (185)

are real numbers. Suppose also that eps > 0, and that the sequence x_k, ..., x_{k+l+2} is defined via the formulae

    x_k = 1, x_{k+1} = 1 + eps,                                 (186)

and

    ( x_{j+1}, x_{j+2} )^T = |  0   1              | ( x_j, x_{j+1} )^T,   (187)
                             | -1   2 cos(theta_j) |

for every j = k, ..., k+l. Suppose furthermore that

    (2l - 1) * theta_{k+l} < pi.                                (188)

Then,

    x_k, x_{k+1}, ..., x_{k+l} > 0.                             (189)

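The comparison sequence in the proof of Theorem 20 has an explicit closed form, which is easy to confirm numerically. A sketch (hypothetical angle theta = pi/9 and l = 4, so that condition (188) holds):

```python
import math

# Constant-angle sequence of the proof of Theorem 20:
# t_1 = t_2 = 1, t_{m+2} = 2 cos(theta) t_{m+1} - t_m.
theta = math.pi / 9
l = 4
assert (2 * l - 1) * theta < math.pi            # condition (188)

t = [1.0, 1.0]
for _ in range(l + 1):
    t.append(2.0 * math.cos(theta) * t[-1] - t[-2])

# Closed form (195): t_{k+m} = cos((2m - 1) theta / 2) / cos(theta / 2).
closed = [math.cos((2 * m - 1) * theta / 2) / math.cos(theta / 2)
          for m in range(len(t))]
max_err = max(abs(a - b) for a, b in zip(t, closed))

# (189): the first l + 1 terms stay positive, since (2m - 1) theta / 2 < pi/2
# for m = 0, ..., l under (188).
positive = all(v > 0 for v in t[:l + 1])
```

The closed form makes the content of (188) transparent: the constant-angle sequence first vanishes when (2m - 1) theta reaches pi, so smaller angles buy a longer guaranteed positive run.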
Proof. Suppose that the sequence t_k, ..., t_{k+l+2} is defined via the formulae

    t_k = t_{k+1} = 1,                                          (190)

and

    ( t_{j+1}, t_{j+2} )^T = |  0   1                  | ( t_j, t_{j+1} )^T,   (191)
                             | -1   2 cos(theta_{k+l}) |

for every j = k, ..., k+l (note that the matrix in (191) is the same for every j). It follows from (190), (191) that

    ( t_{k+m}, t_{k+m+1} )^T = |  0   1                  |^m ( 1, 1 )^T,   (192)
                               | -1   2 cos(theta_{k+l}) |

for every m = 0, ..., l+1. We observe that

    |  0   1            |^m     | 1            1            |   | e^{im theta}  0             |   | 1            1            |^{-1}
    | -1   2 cos(theta) |    =  | e^{i theta}  e^{-i theta} | * | 0             e^{-im theta} | * | e^{i theta}  e^{-i theta} |,   (193)

for every real theta and every integer m = 0, 1, ... (see also Theorem 15). We combine (192), (193) to obtain

    ( t_{k+m}, t_{k+m+1} )^T
        = (1 / (2 cos(theta_{k+l}/2)))
          * ( e^{i (2m-1) theta_{k+l}/2} + e^{-i (2m-1) theta_{k+l}/2},
              e^{i (2m+1) theta_{k+l}/2} + e^{-i (2m+1) theta_{k+l}/2} )^T.   (194)

In other words,

    t_{k+m} = cos( (2m - 1) * theta_{k+l} / 2 ) / cos( theta_{k+l} / 2 ),   (195)

for every m = 0, ..., l+2. We combine (195) with (188) to conclude that

    t_{k+1}, ..., t_{k+l} > 0.                                  (196)

We combine (190), (191), (196) with Theorems 18, 19 to obtain (189).

Corollary 6. Suppose, in addition, that l >= 3, and that k+1 <= j <= k+l-2 is an integer. Then,

    x_{j+1} > x_j / 2 > x_{j-1} / 3.                            (197)

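A concrete instance of the sequence of Theorem 20 also illustrates the ratio bounds of Corollary 6. The sketch below uses hypothetical increasing angles satisfying (188) and checks both the positivity (189) and the chained inequality (197) in the interior of the positive run.

```python
import math

# Hypothetical data: angles theta_k, ..., theta_{k+l} with l = 5.
eps = 0.05
thetas = [0.20, 0.22, 0.25, 0.28, 0.30, 0.32]
l = len(thetas) - 1
assert (2 * l - 1) * thetas[-1] < math.pi        # condition (188)

x = [1.0, 1.0 + eps]                             # (186)
for t in thetas:
    x.append(2.0 * math.cos(t) * x[-1] - x[-2])  # recurrence (187)

positive = all(v > 0 for v in x[:l + 1])         # (189)

# (197): x_{j+1} > x_j / 2 > x_{j-1} / 3 for the interior indices
# (0-based i = j - k runs over 1, ..., l - 2).
cor6 = all(x[i + 1] > x[i] / 2 > x[i - 1] / 3 for i in range(1, l - 1))
```

Corollary 6 is what later makes one perturbed recurrence step in the oscillatory region harmless: consecutive coordinates cannot collapse by more than a fixed factor, so relative errors stay comparable.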
Proof. We combine (185), (187) and (189) to obtain

    x_{j+2} = 2 cos(theta_j) * x_{j+1} - x_j > 0,               (198)

and hence

    x_j / 2 < x_j / (2 cos(theta_j)) < x_{j+1}.                 (199)

We combine (187), (189) and (199) to obtain

    x_j / 2 < x_{j+1} = 2 cos(theta_{j-1}) * x_j - x_{j-1},     (200)

and hence

    x_{j-1} < x_j * ( 2 cos(theta_{j-1}) - 1/2 ) < 3 x_j / 2.   (201)

We combine (199), (201) to obtain (197).

The following theorem follows from the combination of Theorems 11, 17, 20 and Corollary 6.

Theorem 21. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1, that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector with positive first coordinate, i.e. x_1 > 0. Suppose also that

    lambda - A_1 > ... > lambda - A_k >= 2 > lambda - A_{k+1} > ... > lambda - A_{k+l+1} > 0,   (202)

for some integers 1 <= k < n and 1 <= l <= n - k - 1. Suppose furthermore that

    (lambda - A_{k+l+1}) / 2 > cos( pi / (2l - 1) ).            (203)

Then,

    0 < x_1 < x_2 < ... < x_k < x_{k+1}.                        (204)

Also,

    x_k, x_{k+1}, ..., x_{k+l} > 0.                             (205)

If, in addition, l >= 3, then

    x_{k+1} > 2 x_k / 3, x_{k+2} > 2 x_{k+1} / 3, ..., x_{k+l} > 2 x_{k+l-1} / 3,   (206)

and

    x_{k+l-1} > 2 x_{k+l} / 3.                                  (207)

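Theorem 22 below is a one-step error-propagation bound; its content can be seen on a single perturbed step of the recurrence. A sketch with hypothetical numbers chosen so that |lambda - A_j| < 2 and the ratio bounds (206) hold:

```python
# One perturbed step of the recurrence (67) in the oscillatory regime.
# All numbers are hypothetical, chosen so that x_j / x_{j+1} < 3/2 and
# x_{j-1} / x_{j+1} < 9/4, as guaranteed by (206).
lam, Aj = 10.0, 8.2                      # lambda - A_j = 1.8, inside (-2, 2)
xjm1, xj = 0.9, 1.0                      # x_{j-1}, x_j
xjp1 = (lam - Aj) * xj - xjm1            # exact step (67): x_{j+1} = 0.9

eps_j, eps_jm1 = 1e-3, -2e-3             # relative perturbations of the inputs
xjp1_pert = (lam - Aj) * xj * (1 + eps_j) - xjm1 * (1 + eps_jm1)   # (208)

rel_err = abs(xjp1_pert - xjp1) / xjp1
bound = 1.5 * abs(eps_j) * abs(lam - Aj) + 2.25 * abs(eps_jm1)     # (209)
within = rel_err <= bound
```

The factors 3/2 and 9/4 in (209) are exactly the worst-case coordinate ratios allowed by (206), so the bound cannot be tightened without sharper ratio information.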
Proof. We obtain (204) from the combination of (202) and Theorem 11. Also, we obtain (205) from the combination of Theorems 17, 20. Finally, we combine Theorems 17, 20 and Corollary 6 to obtain (206), (207).

Theorem 22. Suppose that the integers 1 <= k < n and l >= 3, the real numbers A_1, ..., A_n and lambda, and the vector x = (x_1, ..., x_n) in R^n are those of Theorem 21. Suppose also that k+1 <= j <= k+l-3 is an integer, and that eps_{j-1}, eps_j are real numbers. Suppose furthermore that the real number x~_{j+1} is defined via the formula

    x~_{j+1} = (lambda - A_j) * x_j * (1 + eps_j) - x_{j-1} * (1 + eps_{j-1}).   (208)

Then,

    |x~_{j+1} - x_{j+1}| / x_{j+1} <= (3/2) * |eps_j| * |lambda - A_j| + (9/4) * |eps_{j-1}|.   (209)

Proof. We combine (208) with (67) of Theorem 7 to obtain

    x~_{j+1} - x_{j+1} = (lambda - A_j) * x_j * eps_j - x_{j-1} * eps_{j-1}.   (210)

Also, due to the combination of (67) and (206) of Theorem 21,

    x_j / x_{j+1} < 3/2 and, consequently, x_{j-1} / x_{j+1} < 9/4.   (211)

We combine (210), (211) to obtain (209).

Remark 14. While a more careful analysis would lead to a stronger inequality than (209), the latter is sufficient for the purposes of this paper.

The following theorem is almost identical to Theorem 14, with the only difference that the recurrence relation is reversed.

Theorem 23. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1, that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector. Suppose also that the set I_O(lambda), defined via (125) in Definition 2, is

    I_O(lambda) = {k+1, ..., m},                                (212)

for some integers 1 <= k < m < n. Suppose furthermore that the real numbers phi_k, ..., phi_{m-1} are defined via the formula

    phi_j = arccos( (A_{j+1} - lambda) / 2 ),                   (213)

for every j = k, ..., m-1, that the two-dimensional vectors w_k, ..., w_{m+1} are defined via the formula

    w_j = ( (-1)^{m-j} * x_j, (-1)^{m-j+1} * x_{j-1} )^T,       (214)

for every integer j = k, ..., m+1, and that the 2 by 2 matrices P_k, ..., P_{m-1} are defined via the formula

    P_j = |  0   1            |
          | -1   2 cos(phi_j) |,                                (215)

for every integer j = k, ..., m-1. Then,

    w_j = P_{j-1} w_{j+1},                                      (216)

for every integer j = k+1, ..., m.

Proof. The proof is essentially identical to that of Theorem 14.

The following theorem is similar to Theorem 17, with the only difference that the recurrence relation is reversed.

Theorem 24. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1, that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector. Suppose also that the set I_O(lambda), defined via (125) in Definition 2, is

    I_O(lambda) = {k+1, ..., m},                                (217)

for some integers 1 <= k < m < n. Suppose furthermore that the real numbers phi_k, ..., phi_{m-1} are defined via (213) of Theorem 23, and that the complex numbers gamma_k, ..., gamma_{m-1} and delta_k, ..., delta_{m-1} are defined via the equation

    | 1            1            | ( gamma_j, delta_j )^T = ( (-1)^{m-j} * x_{j+2}, (-1)^{m-j-1} * x_{j+1} )^T,   (218)
    | e^{i phi_j}  e^{-i phi_j} |

in the unknowns gamma_j, delta_j, for j = k, ..., m-1. Then,

    x_{j+2} = 2 Re(gamma_j) * (-1)^{m-j},                       (219)

and

    x_{j+1} = 2 Re( gamma_j e^{i phi_j} ) * (-1)^{m-j-1},       (220)

for every j = k, ..., m-1. Moreover,

    gamma_{j-1} = gamma_j e^{i phi_j}
        - i * ( cos(phi_j) - cos(phi_{j-1}) ) / ( sin(phi_{j-1}) * sin((phi_j + phi_{j-1})/2) )
            * Im( gamma_j e^{i phi_j} e^{i (phi_j + phi_{j-1})/2} ),   (221)

for every j = k+1, ..., m-1.

Proof. The proof is essentially identical to that of Theorem 17, and will be omitted.

Corollary 7. Under the assumptions of Theorem 24,

    | gamma_{j-1} - gamma_j e^{i phi_j} |
        <= |gamma_j| * ( cos(phi_j) - cos(phi_{j-1}) ) / ( sin(phi_{j-1}) * sin((phi_j + phi_{j-1})/2) ),   (222)

for every j = k+1, ..., m-1.

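The reversed transfer relation of Theorem 23 holds for any solution of the three-term recurrence, so it can be verified on synthetic data. A sketch (hypothetical A_j and lambda with |lambda - A_j| < 2 throughout, so the angles phi_j are well defined):

```python
import math

# Hypothetical data: A_j = 2 + 0.4 j and lambda = 4, so (A_j - lambda)/2
# lies in (-1, 1) and phi_j = arccos((A_{j+1} - lambda)/2) is defined.
n, m = 8, 8
A = [2 + 0.4 * j for j in range(1, n + 1)]
lam = 4.0

# Any solution of x_{j+1} = (lambda - A_j) x_j - x_{j-1} will do here.
x = [1.0, 0.8]
for j in range(2, n):
    x.append((lam - A[j - 1]) * x[-1] - x[-2])

def w(j):
    # w_j = ((-1)^{m-j} x_j, (-1)^{m-j+1} x_{j-1})   (214), 1-based j
    return ((-1) ** (m - j) * x[j - 1], (-1) ** (m - j + 1) * x[j - 2])

# Check w_j = P_{j-1} w_{j+1}  (216), with P as in (215).
ok = True
for j in range(2, n):
    phi = math.acos((A[j - 1] - lam) / 2.0)      # phi_{j-1}, since A[j-1] = A_j
    wj1 = w(j + 1)
    pred = (wj1[1], -wj1[0] + 2.0 * math.cos(phi) * wj1[1])
    ok = ok and abs(pred[0] - w(j)[0]) < 1e-9 and abs(pred[1] - w(j)[1]) < 1e-9
```

The sign flips in (214) convert the backward recurrence into the same positive-transfer form as (132), (133), which is what lets the forward results (Theorems 17-21) be reused verbatim on the reversed sequence.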
Corollary 8. Under the assumptions of Theorem 24,

    x_{j+1}^2 + x_{j+2}^2 = 4 R_j^2 * ( 1 + cos(phi_j) * cos(2 xi_j + phi_j) ),   (223)

for every j = k, ..., m-1, where the real numbers R_j > 0 and xi_j are defined via

    gamma_j = R_j e^{i xi_j},                                   (224)

for every j = k, ..., m-1.

Proof. We combine (218) of Theorem 24 and (138) of Theorem 15 to obtain

    x_{j+1}^2 + x_{j+2}^2 = 2 cos(phi_j) * ( gamma_j^2 e^{i phi_j} + delta_j^2 e^{-i phi_j} ) + 4 gamma_j delta_j
                          = 4 * ( |gamma_j|^2 + cos(phi_j) * Re( gamma_j^2 e^{i phi_j} ) ).   (225)

We substitute (224) into (225) to obtain (223).

The following theorem is analogous to Theorem 21.

Theorem 25. Suppose that the n by n symmetric tridiagonal matrix A is that of Definition 1 in Section 4.1, that lambda is an eigenvalue of A, and that x = (x_1, ..., x_n) in R^n is a corresponding eigenvector. Suppose also that

    0 > lambda - A_{m-r} > ... > lambda - A_m > -2 >= lambda - A_{m+1},   (226)

for some integers 1 <= m < n and 1 <= r <= m - 1. Suppose furthermore that

    (A_{m-r} - lambda) / 2 > cos( pi / (2r - 1) ),              (227)

and that x_m > 0. Then,

    0 < (-1)^{n-m} * x_n < (-1)^{n-1-m} * x_{n-1} < ... < -x_{m+1} < x_m.   (228)

Also,

    -x_{m+1}, x_m, -x_{m-1}, ..., (-1)^{r-1} * x_{m-r+1} > 0.   (229)

If, in addition, r >= 3, then

    x_m > -2 x_{m+1} / 3, -x_{m-1} > 2 x_m / 3, ..., (-1)^{r-3} * x_{m-r+3} > (2/3) * (-1)^{r-4} * x_{m-r+4},   (230)

and

    (-1)^{r-2} * x_{m-r+2} > (2/3) * (-1)^{r-3} * x_{m-r+3}.    (231)