Inverses, determinants, eigenvalues, and eigenvectors of real symmetric Toeplitz matrices with linearly increasing entries


Inverses, determinants, eigenvalues, and eigenvectors of real symmetric Toeplitz matrices with linearly increasing entries

F. Bünger
Institute for Reliable Computing, Hamburg University of Technology, Schwarzenbergstr. 95, D-21073 Hamburg, Germany

Abstract

We explicitly determine the skew-symmetric eigenvectors and corresponding eigenvalues of the real symmetric Toeplitz matrices T = T(a, b, n) := (a + b|j−k|)_{1≤j,k≤n} of order n ≥ 3 where a, b ∈ R, b ≠ 0. The matrix T is singular if and only if c := a/b = (1−n)/2. In this case we also explicitly determine the symmetric eigenvectors and corresponding eigenvalues of T. If T is regular, we explicitly compute the inverse T^{−1}, the determinant det T, and the symmetric eigenvectors and corresponding eigenvalues of T are described in terms of the roots of the real self-inversive polynomial p_n(δ; z) := (z^{n+1} − δz^n − δz + 1)/(z+1) if n is even, and p_n(δ; z) := z^{n+1} − δz^n − δz + 1 if n is odd, δ := 1 + 2/(2c+n−1).

Keywords: Toeplitz matrix, inverse, determinant, eigenvalue, eigenvector
2010 MSC: Primary 15B05; Secondary 15A18, 15A09.

1. Introduction, main results

For a, b ∈ R, b ≠ 0, and n ∈ N, n ≥ 3, we consider the real symmetric Toeplitz matrices T = T(a, b, n) := (a + b|j−k|)_{1≤j,k≤n} of order n. For example, subclasses of these matrices occur in the literature as test matrices for computational algorithms for the inversion of symmetric matrices. For instance, the matrices T(n, −1, n) = (n − |j−k|)_{1≤j,k≤n} occur in the matrix collections of Gregory and Karney [6] and of Westlake [6], both

Preprint submitted to Linear Algebra and its Applications, July 30, 2014

referring to Lietzke and Stoughton [], where the analytical inverse is given. Todd [3] describes the matrices T(0, 1, n) = (|j−k|)_{1≤j,k≤n} in detail. Again the analytical inverse is explicitly constructed, which can be deduced from a matrix inversion formula given by Fiedler for the even more general class of symmetric matrices C = (c_{max(j,k)} − c_{min(j,k)})_{1≤j,k≤n} where c_1, …, c_n ∈ R satisfy c_i ≠ c_{i+1} for i ∈ {1, …, n−1} and c_n ≠ c_1 (see [3]). Also asymptotic bounds for the eigenvalues of T(0, 1, n) are discussed in [3]. By other means Bogoya, Böttcher, and Grudsky [1] investigated the more general class of Hermitian n×n Toeplitz matrices T_n = T_n[a_0, a_1, …, a_{n−1}] with polynomially increasing first row entries a_k = p(k), k = 0, …, n−1, where p(x) = Σ_{i=0}^α p_i x^i is some polynomial of degree α ∈ N. Besides establishing general spectral properties of these matrices they derive special results for the linear case p(x) = a + bx in which T_n equals T(a, b, n). We will give a unified approach and closed formulas for inverses, determinants, eigenvalues, and eigenvectors of the matrices T(a, b, n) which, as an application, will sharpen the corresponding results in [1]. Doing this, we use results of Yueh [8], Yueh and Cheng [9], and Willms [7] concerning eigenvalues and eigenvectors of tridiagonal matrices with perturbed corners.

Clearly, since T(a, b, n) = b·T(a/b, 1, n), it suffices to consider the matrices T(c, n) := T(c, 1, n), c ∈ R. The main results that we will prove in the subsequent sections are now formulated.

Theorem 1. The matrix T := T(c, n) = (c + |j−k|)_{1≤j,k≤n}, c ∈ R, n ∈ N, is regular if and only if c ≠ (1−n)/2. In this case, and if n ≥ 3, then

(T^{−1})_{j,k} =
  τ − 1/2       if j = k ∈ {1, n},
  −1            if j = k ∈ {2, …, n−1},
  1/2           if |j−k| = 1,
  τ             if (j, k) ∈ {(1, n), (n, 1)},
  0             otherwise,
with τ := 1/(2(2c+n−1)).  (1)

In general, for every c and every n, we have

det T = (−1)^{n−1} 2^{n−2} (2c + n − 1).  (2)

These matrices themselves build a subclass of the even more general class of generalized Kac–Murdock–Szegő matrices introduced and analyzed by Trench [4].
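Theorem 1 is easy to check numerically. The following sketch (not part of the paper; NumPy-based, with helper names toeplitz_T and inverse_T of our own choosing) builds T(c, n), the inverse claimed in (1), and compares the determinant with (2):

```python
import numpy as np

def toeplitz_T(c, n):
    """T(c, n) = (c + |j - k|) for j, k = 1, ..., n."""
    idx = np.arange(n)
    return c + np.abs(idx[:, None] - idx[None, :]).astype(float)

def inverse_T(c, n):
    """Closed-form inverse of T(c, n) as in (1); assumes n >= 3 and 2c + n - 1 != 0."""
    tau = 1.0 / (2.0 * (2.0 * c + n - 1.0))
    i = np.arange(n)
    Tinv = np.zeros((n, n))
    Tinv[i[:-1], i[1:]] = 0.5            # superdiagonal
    Tinv[i[1:], i[:-1]] = 0.5            # subdiagonal
    Tinv[i[1:-1], i[1:-1]] = -1.0        # interior diagonal
    Tinv[0, 0] = Tinv[-1, -1] = tau - 0.5
    Tinv[0, -1] = Tinv[-1, 0] = tau      # the two perturbed corners
    return Tinv

n, c = 7, 1.3
T = toeplitz_T(c, n)
assert np.allclose(T @ inverse_T(c, n), np.eye(n))
det_formula = (-1) ** (n - 1) * 2 ** (n - 2) * (2 * c + n - 1)
assert np.isclose(np.linalg.det(T), det_formula)
```

The check confirms that T^{−1} is tridiagonal apart from its four corners.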

For a, b ∈ R, b ≠ 0, c := a/b, Theorem 1 implies

det T(a, b, n) = b^n det T(c, n) = (−1)^{n−1} b^n 2^{n−2} (2c + n − 1) = (−b)^{n−1} 2^{n−2} (2a + b(n−1)).  (3)

As an application, we immediately see that according to Sylvester's criterion T(a, b, n) is positive definite if and only if

b < 0 and c < (1−n)/2 (or equivalently a > −b(n−1)/2).  (4)

In [1], Theorem 1.3 a), a sufficient condition for the matrices T := T(R, −h, n), R, h ∈ R_{>0}, n ∈ N, to be positive definite was proved; by (4) with a := R, b := −h, c := −R/h that condition can be weakened to 2R − h(n−1) > 0.

Real symmetric Toeplitz matrices of order n ∈ N possess an orthogonal basis of eigenvectors consisting of ⌊n/2⌋ skew-symmetric and n − ⌊n/2⌋ symmetric eigenvectors, where a vector v = (v_1, …, v_n)^t ∈ R^n is called symmetric if v_k = v_{n+1−k} and skew-symmetric if v_k = −v_{n+1−k} for all k ∈ {1, …, n} (see [2]). The following theorem distinguishes between these two kinds of eigenvectors of regular matrices T(c, n).

Footnote 2: Thanks to Prof. A. Böttcher, TU Chemnitz, for noting this application of Theorem 1.

Theorem 2. Let T := T(c, n) = (c + |j−k|)_{1≤j,k≤n}, c ∈ R \ {(1−n)/2}, n ∈ N, n ≥ 3, m := ⌊n/2⌋.

a) The eigenvalues λ_k of T corresponding to skew-symmetric eigenvectors v^{(k)} = (v^{(k)}_1, …, v^{(k)}_n)^t ∈ R^n, k ∈ {1, …, m}, are

λ_k = 1/(cos((2k−1)π/n) − 1).  (5)

The components of v^{(k)} can be scaled to

v^{(k)}_j = −v^{(k)}_{n+1−j} = cos((2j−1)(2k−1)π/(2n)) if 1 ≤ j ≤ m, and v^{(k)}_j = 0 if j = m+1 and n is odd,  (6)

for j ∈ {1, …, m}.

b) The eigenvalues µ_k of T corresponding to symmetric eigenvectors w^{(k)} = (w^{(k)}_1, …, w^{(k)}_n)^t ∈ R^n, k ∈ {1, …, n−m}, are

µ_k = 1/(cos θ_k − 1)  (7)

where z_k = e^{iθ_k}, θ_k ∈ R := (0, π] ∪ iR_{>0} ∪ (π + iR_{>0}), i := √−1, are the n−m pairwise distinct roots in R of the real self-inversive polynomial (3) of degree 2(n−m)

p_n(δ; z) := (z^{n+1} − δz^n − δz + 1)/(z+1) = z^n + (1+δ) Σ_{j=1}^{n−1} (−z)^j + 1  if n is even,
p_n(δ; z) := z^{n+1} − δz^n − δz + 1  if n is odd,  (8)

with δ := 1 + 2/(2c+n−1) ∈ R \ {1}. Equivalently, the θ_k are the distinct zeros in R of the trigonometric function

P_n(δ; θ) := (cos((n+1)θ/2) − δ cos((n−1)θ/2))/cos(θ/2)  if n is even,
P_n(δ; θ) := cos((n+1)θ/2) − δ cos((n−1)θ/2)  if n is odd.  (9)

Footnote 3: A real polynomial p(z) = Σ_{k=0}^n p_k z^k of degree n ∈ N_0 is called self-inversive if p*(z) := z^n p(1/z) = Σ_{k=0}^n p_{n−k} z^k = ±p(z). Roots of self-inversive polynomials occur in reciprocal pairs z, 1/z ∈ C \ {0}.
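Theorem 2 can be exercised numerically: compute the skew part from (5), obtain the θ_k of part b) as roots of q_n(δ; z) := z^{n+1} − δ(z^n + z) + 1, and compare the combined spectrum with a dense eigensolver. This is our own sketch (not from the paper); the root bookkeeping (one representative per reciprocal pair) is an implementation choice:

```python
import numpy as np

def toeplitz_T(c, n):
    idx = np.arange(n)
    return c + np.abs(idx[:, None] - idx[None, :]).astype(float)

def predicted_spectrum(c, n):
    """Eigenvalues of regular T(c, n) from Theorem 2: skew part (5), symmetric part (7)."""
    m = n // 2
    k = np.arange(1, m + 1)
    skew = 1.0 / (np.cos((2 * k - 1) * np.pi / n) - 1.0)        # eq. (5)
    delta = 1.0 + 2.0 / (2.0 * c + n - 1.0)
    coeffs = np.zeros(n + 2)
    coeffs[[0, 1, n, n + 1]] = [1.0, -delta, -delta, 1.0]       # q_n(delta; z)
    roots = np.roots(coeffs)
    if n % 2 == 0:                      # q_n = (z+1) * p_n for even n
        roots = np.delete(roots, np.argmin(np.abs(roots + 1.0)))
    # roots occur in reciprocal pairs (z, 1/z); keep one representative per pair
    keep = (roots.imag > 1e-8) | ((np.abs(roots.imag) <= 1e-8) & (np.abs(roots) < 1.0))
    z = roots[keep]
    cos_theta = ((z + 1.0 / z) / 2.0).real
    sym = 1.0 / (cos_theta - 1.0)                                # eq. (7)
    return np.sort(np.concatenate([skew, sym]))

n, c = 7, 0.8
T = toeplitz_T(c, n)
assert np.allclose(predicted_spectrum(c, n), np.linalg.eigvalsh(T), atol=1e-6)
```

Note that cos θ_k > 1 for an imaginary θ_k (a real root pair of p_n), which produces the positive eigenvalues.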

The eigenvectors w^{(k)} can be scaled to

w^{(k)}_{m+j} = w^{(k)}_{n−m+1−j} =
  cos((2j−1)θ_k/2)      if n is even and θ_k ≠ π,
  (−1)^{j−1}(2j−1)      if n is even and θ_k = π,
  cos((j−1)θ_k)          if n is odd,  (10)

j ∈ {1, …, n−m}. Moreover, p_n(δ; z) possesses a root z_k = −1, that is θ_k = π for some k, if and only if one of the following two cases holds true:

(i) n is even and δ = −(n+1)/(n−1), i.e., c = −n/2 + 1/(2n),
(ii) n is odd and δ = −1, i.e., c = −n/2.

In both cases the remaining z_k are roots of p_n(δ; z)/(z+1).

The location of the roots of the polynomials p_n(δ; z) defined in (8) is elucidated in the sequel. The proof of these facts is, although straightforward, quite technical and therefore placed in the appendix. The stated inclusions of the roots might be helpful from a numerical point of view for choosing appropriate starting points of Newton's method for finding numerical approximations of the roots. Figure 1 clarifies the distinguished cases.

Let n ∈ N, n ≥ 3, m := ⌊n/2⌋, δ ∈ R, and p(z) := p_n(δ; z) be the polynomial of degree 2(n−m) defined in (8). Since p_n(δ; z) is self-inversive of even degree, its roots occur in pairs (z_k, 1/z_k), k = 1, …, n−m, with z_k = e^{iθ_k}, θ_k ∈ C, Re(θ_k) ∈ [0, π], Im(θ_k) ∈ R_{≥0}, for k = 1, …, n−m. Suppose that the roots are ordered such that Re(θ_1) ≤ Re(θ_2) ≤ … ≤ Re(θ_{n−m}).

P1) If δ = 0, then θ_k = (2k−1)π/(n+1), k = 1, …, n−m. (4)
P2) If δ = 1, then θ_k = (2k−2)π/n, k = 1, …, n−m. In particular, z_1 = 1 is a double root. (5)
P3) If δ = −1, then θ_k = (2k−1)π/n, k = 1, …, n−m.
P4) If δ = ±∞, then θ_1 ∈ i(+∞), that is z_1 = 0, and θ_k = (2k−3)π/(n−1), k = 2, …, n−m.
P5) If δ ∈ (0, 1), then θ_k ∈ ((2k−2)π/n, (2k−1)π/(n+1)), k = 1, …, n−m.
P6) If δ ∈ (−1, 0), then θ_k ∈ ((2k−1)π/(n+1), (2k−1)π/n), k = 1, …, n−m.

Footnote 4: This corresponds to c = −(n+1)/2 in Theorem 2 b).
Footnote 5: In Theorem 2 b) δ = 1 + 2/(2c+n−1) does not attain the value δ = 1. We do not exclude this case here since it corresponds to the asymptotic cases c → ±∞.
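The root-location statements P1) and P6) can be probed numerically. The following sketch (our own, not from the paper) extracts the angles of the unit-modulus roots of q_n(δ; z) := z^{n+1} − δ(z^n + z) + 1 and checks them against the stated values and intervals:

```python
import numpy as np

def unit_circle_angles(delta, n):
    """Angles in (0, pi) of the unit-modulus roots of q_n(delta; z)."""
    coeffs = np.zeros(n + 2)
    coeffs[[0, 1, n, n + 1]] = [1.0, -delta, -delta, 1.0]
    r = np.roots(coeffs)
    # keep one representative of each conjugate pair on the unit circle
    r = r[(np.abs(np.abs(r) - 1.0) < 1e-8) & (r.imag > 1e-8)]
    return np.sort(np.angle(r))

n, m = 10, 5
k = np.arange(1, n - m + 1)
# P1) delta = 0: theta_k = (2k-1) pi / (n+1)
assert np.allclose(unit_circle_angles(0.0, n), (2 * k - 1) * np.pi / (n + 1))
# P6) delta in (-1, 0): theta_k strictly between (2k-1)pi/(n+1) and (2k-1)pi/n
th = unit_circle_angles(-0.4, n)
assert np.all(((2 * k - 1) * np.pi / (n + 1) < th) & (th < (2 * k - 1) * np.pi / n))
```

Analogous checks for P5), P7), and P8) only change δ and the interval endpoints.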

P7) If δ ∈ (1, ∞), then θ_1 ∈ i(0, ∞), i.e., z_1 ∈ (0, 1), and θ_k ∈ ((2k−3)π/(n−1), (2k−2)π/n), k = 2, …, n−m.

P8) If δ ∈ (−∞, −1), then θ_k ∈ ((2k−1)π/n, (2k−1)π/(n−1)) for k = 1, …, n−m−1. If n is odd or δ ∈ (−∞, −(n+1)/(n−1)), then θ_{n−m} ∈ π + i(0, ∞), that is z_{n−m} ∈ (−1, 0). If n is even and δ ∈ (−(n+1)/(n−1), −1), then θ_{n−m} ∈ ((n−1)π/n, π). If n is even and δ = −(n+1)/(n−1), then θ_{n−m} = π.

In general: for odd n, if |δ| < 1, then all n+1 roots of p(z) have modulus 1 and are non-real and simple. If δ = ±1, then p(z) has n−1 non-real simple roots of modulus 1 and one double root z = δ. If |δ| > 1, then p(z) has n−1 non-real simple roots of modulus 1 and two simple real roots z, 1/z with sign(z) = sign(δ). For even n, if δ ∈ (−(n+1)/(n−1), 1), then all n roots of p(z) have modulus 1 and are non-real and simple. If δ ∈ {−(n+1)/(n−1), 1}, then p(z) has n−2 non-real simple roots of modulus 1 and one double root z = sign(δ). If δ ∈ R \ [−(n+1)/(n−1), 1], then p(z) has n−2 non-real simple roots of modulus 1 and two simple real roots z, 1/z with sign(z) = sign(δ).

Moreover, in the cases P5) to P8) the real angles θ_k = θ_k(δ) and the real roots z_1 = z_1(δ) [case P7)], z_{n−m} = z_{n−m}(δ) [case P8)] are monotonically decreasing functions of δ moving from the upper interval boundary to the lower with speed

θ'_k(δ) = 2 cos((n−1)θ_k(δ)/2) / (−(n+1) sin((n+1)θ_k(δ)/2) + (n−1)δ sin((n−1)θ_k(δ)/2)) ≠ 0  (11)

for all k = 1, …, n−m. In particular,

lim_{δ→−∞}(cos θ_k(δ)) = lim_{δ→+∞}(cos θ_{k+1}(δ)) = cos((2k−1)π/(n−1)).  (12)

The inclusions for the real roots z_1, z_{n−m} of p_n(δ; z) in the cases P7) and P8) respectively can be sharpened by using standard bounds for roots of real polynomials, for example Cauchy's bound:

P7') ζ := 1/z_1 ∈ (δ, δ + ρ] with ρ := min(1, 1/(δ−1)).
P8') If n is odd, then ζ := 1/z_{n−m} ∈ [δ − ρ, δ) with ρ := min(1, 1/(|δ|−1)). If n is even and δ < −(n+1)/(n−1), then ζ := 1/z_{n−m} ∈ (δ, δ + ρ) with ρ := min(1, 1/|δ+1|).

As an application, for T(0, 1, n) Theorem 2 and P7) with c := 0 and δ = (n+1)/(n−1) ∈ (1, 2] show that T(0, 1, n) has exactly one positive eigenvalue

α := µ_1 = 1/(cosh(t_1) − 1), where θ_1 = i t_1, t_1 > 0,  (13)
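The one-positive-eigenvalue statement for T(0, 1, n), together with the bound −1/2 on the remaining eigenvalues derived on the next page, is easy to confirm numerically (our own sketch, assuming NumPy):

```python
import numpy as np

n = 12
# T(0, 1, n) = (|j - k|), the "distance matrix" of equispaced points on a line
A = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
eigs = np.linalg.eigvalsh(A)
assert np.sum(eigs > 0) == 1                     # exactly one positive eigenvalue
assert np.all(eigs[eigs < 0] <= -0.5 + 1e-12)    # all negative ones are at most -1/2
```

The bound −1/2 follows from µ = 1/(cos β − 1) with cos β − 1 ∈ [−2, 0).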

Figure 1: Location of the roots z_1, …, z_{n−m} in the complex plane (real and imaginary axes) for the cases P5) to P8); six panels: P5) δ ∈ (0, 1), n = 10; P6) δ ∈ (−1, 0), n = 10; P7) δ ∈ (1, ∞), n = 10; P8) δ ∈ (−(n+1)/(n−1), −1), n = 9 odd; P8) δ ∈ (−(n+1)/(n−1), −1), n = 10 even; P8) δ ∈ (−∞, −(n+1)/(n−1)), n = 10 even.

and n−1 negative eigenvalues

α_i = 1/(cos β_i − 1) ≤ −1/2  (14)

for suitable angles β_i ∈ (0, π), i = 1, …, n−1. This proves a numerical observation stated in [1] and sharpens the corresponding result of that paper, where only the weaker upper bound −1/4 for the negative eigenvalues of T(0, 1, n) was given.

The next theorem explicitly states the eigenvalues and eigenvectors of the singular matrices T((1−n)/2, n).

Theorem 3. Let T := T((1−n)/2, n) = ((1−n)/2 + |j−k|)_{1≤j,k≤n}, n ∈ N, n ≥ 3, and m := ⌊n/2⌋.

a) The skew-symmetric eigenvectors and corresponding eigenvalues of T are the same as in the regular case stated in Theorem 2 a).

b) For k ∈ {1, …, n−m},

µ_k := 1/(cos((2k−1)π/(n−1)) − 1) if k ∈ {1, …, n−m−1},  µ_k := 0 if k = n−m,  (15)

are the eigenvalues of T corresponding to symmetric eigenvectors w^{(k)} = (w^{(k)}_1, …, w^{(k)}_n)^t where

w^{(k)}_{m+j} = w^{(k)}_{n−m+1−j} =
  cos((2j−1)(2k−1)π/(2(n−1)))  if n is even, k ≤ n−m−1,
  cos((j−1)(2k−1)π/(n−1))       if n is odd, k ≤ n−m−1,
  δ_{n−m,j}                      if k = n−m,  (16)

for j ∈ {1, …, n−m}. (Footnote 6: Here δ_{i,j} is Kronecker's delta.) In particular, T has rank n−1 and the 1-dimensional kernel is spanned by the vector (1, 0, …, 0, 1)^t ∈ R^n.

Now we deduce some consequences of the above results concerning the multiplicities and the so-called Iohvidov parameters of the eigenvalues of the matrices T(c, n). First we recall the less known definition of the Iohvidov
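The kernel statement of Theorem 3 is directly verifiable: (Tu)_j = T_{j,1} + T_{j,n} = 2c + n − 1 = 0 for u = (1, 0, …, 0, 1)^t when c = (1−n)/2. A quick numerical confirmation (our own sketch):

```python
import numpy as np

n = 9
c = (1 - n) / 2.0                      # the singular case
idx = np.arange(n)
T = c + np.abs(idx[:, None] - idx[None, :]).astype(float)
u = np.zeros(n)
u[0] = u[-1] = 1.0                     # u = (1, 0, ..., 0, 1)^t
assert np.allclose(T @ u, 0.0)         # u spans the kernel
assert np.linalg.matrix_rank(T) == n - 1
```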

parameter of an eigenvalue of a real symmetric Toeplitz matrix. With a column vector v = (v_1, …, v_n)^t ∈ R^n, n ∈ N, we associate the polynomial v(z) := (1, z, …, z^{n−1})v = Σ_{k=1}^n v_k z^{k−1} in the unknown z. If v is an eigenvector of some matrix T ∈ R^{n,n}, then we say that v(z) is an eigenpolynomial of T. The following well-known description of eigenpolynomials of symmetric Toeplitz matrices can be found in [5] and [8].

Theorem 4. Let T ∈ R^{n,n} be a symmetric Toeplitz matrix and λ ∈ R be an eigenvalue of T of multiplicity m ∈ {1, …, n}. If r ∈ {0, …, n−1} is the largest integer such that λ is not an eigenvalue of the upper left r×r submatrix T_r := (T_{jk})_{1≤j,k≤r} of T, then λ is a simple eigenvalue of T_{r+1} and the corresponding eigenpolynomial p(z) = p(T, λ; z) is self-inversive and can be chosen to be monic. The Iohvidov parameter l = l(T, λ) := (n−m−r)/2 of λ as an eigenvalue of T is a non-negative integer and the space of eigenpolynomials of T corresponding to λ consists of all polynomials of the form z^l p(z) q(z) where q(z) is an arbitrary real polynomial of degree deg(q) ≤ m−1.

Corollary 5. Let T := T(c, n) := (c + |j−k|)_{1≤j,k≤n}, c ∈ R, n ∈ N, n ≥ 3.

a) If c ≠ −n/2, then T has only simple eigenvalues. If c = −n/2 and n is even, then all eigenvalues have multiplicity 2. If c = −n/2 and n is odd, then the eigenvalue λ = −1/2 of smallest absolute value (|λ| is the smallest singular value) is simple and all others have multiplicity 2.

b) The eigenvalues of T corresponding to skew-symmetric eigenvectors have Iohvidov parameter 0. If T is regular (c ≠ (1−n)/2), then also all eigenvalues corresponding to symmetric eigenvectors have Iohvidov parameter 0. If T is singular (c = (1−n)/2), then all non-zero eigenvalues corresponding to symmetric eigenvectors have Iohvidov parameter 1 and the eigenvalue 0 has Iohvidov parameter 0.

Finally, before proving these results, we want to point out the connection to another class of real autocorrelation Toeplitz matrices.
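Part a) of Corollary 5 can be checked by clustering numerically computed eigenvalues. The following sketch (ours; the gap tolerance is an implementation choice) counts multiplicities for the three cases:

```python
import numpy as np

def eig_multiplicities(c, n, tol=1e-8):
    """Sorted list of eigenvalue multiplicities of T(c, n), clustered by gap tolerance."""
    idx = np.arange(n)
    T = c + np.abs(idx[:, None] - idx[None, :]).astype(float)
    e = np.linalg.eigvalsh(T)
    mult, cur = [], 1
    for i in range(1, n):
        if e[i] - e[i - 1] < tol:
            cur += 1                    # same cluster
        else:
            mult.append(cur)
            cur = 1
    mult.append(cur)
    return sorted(mult)

assert eig_multiplicities(-4.0, 8) == [2, 2, 2, 2]       # c = -n/2, n even: all double
assert eig_multiplicities(-4.5, 9) == [1, 2, 2, 2, 2]    # c = -n/2, n odd: -1/2 simple
assert eig_multiplicities(1.0, 8) == [1] * 8             # generic c: all simple
```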
For m, n ∈ N the real symmetric n×n Toeplitz matrices

A = A(m, n) := (max(m − |j−k|, 0))_{1≤j,k≤n}  (17)

are autocorrelation matrices of the discrete signals x := (Σ_{j=1}^m δ_{j,k})_{k∈Z} with impulse length m. The corresponding autocorrelation function is R_xx(t) := Σ_j x_j x_{j−t} and A_{j,k} = R_xx(|j−k|) for j, k ∈ {1, …, n}. In particular, A is positive definite. To have a picture of the matrices A(m, n) in mind we state the following examples:

A(7, 5) =
[7 6 5 4 3]
[6 7 6 5 4]
[5 6 7 6 5]
[4 5 6 7 6]
[3 4 5 6 7],

A(3, 5) =
[3 2 1 0 0]
[2 3 2 1 0]
[1 2 3 2 1]
[0 1 2 3 2]
[0 0 1 2 3].

Another definition of A via its Laurent polynomial is

A_{j,k} = (1/(2π)) ∫_{−π}^{π} p(e^{iθ}) p(e^{−iθ}) e^{i(j−k)θ} dθ,  j, k ∈ {1, …, n},

where p(z) := (z^m − 1)/(z − 1) = Σ_{k=0}^{m−1} z^k and i := √−1. The matrix A is non-negative and irreducible since its graph is strongly connected, as A_{j,j} = m ≠ 0 for j = 1, …, n and A_{k,k+1} = m−1 ≠ 0 for k = 1, …, n−1 (m ≥ 2). Hence, by the Perron–Frobenius Theorem, the largest eigenvalue of A is simple. For m ≥ n, we have A(m, n) = T(m, −1, n) and therefore the above results imply, since m ≠ n/2, that actually all eigenvalues of A(m, n) are simple. For m < n, repeated zeros occur in the upper right and lower left corners of A(m, n). Thus, for growing n and constant m a band structure with bandwidth m is built out. In this case the eigenvalues of A(m, n) are not simple in general. For example, symbolic computations suggest that the matrices A(m, m(m−1)) for m ≥ 4 have an eigenvalue of multiplicity m. (Footnote 7: Thanks to Prof. S.M. Rump, TU Hamburg-Harburg, for pointing out this regular behavior.) On the other hand, numerical simulations support

Conjecture 6. The smallest eigenvalue of A is simple.

To the author's knowledge the interesting class of integral, symmetric, positive definite, non-negative, and irreducible Toeplitz matrices A(m, n) was explicitly treated for m < n only by L. Rehnquist, who made an approach to compute the inverses. Since the class A(m, n) has so many structural properties and such a tempting simple pattern from the purely matrix-
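The autocorrelation description of A(m, n) translates directly into code. This sketch (ours, not from the paper) builds A(m, n) from the box signal and checks it against the explicit entry formula (17) and positive definiteness:

```python
import numpy as np

def autocorr_matrix(m, n):
    """A(m, n) built from the autocorrelation of the length-m box signal."""
    x = np.ones(m)                        # impulse of length m
    r = np.correlate(x, x, mode="full")   # R_xx(t), t = -(m-1), ..., m-1
    r = np.concatenate([r[m - 1:], np.zeros(max(0, n - m))])  # R_xx(0), ..., R_xx(n-1)
    idx = np.arange(n)
    return r[np.abs(idx[:, None] - idx[None, :])]

m, n = 3, 5
A = autocorr_matrix(m, n)
idx = np.arange(n)
assert np.array_equal(A, np.maximum(m - np.abs(idx[:, None] - idx[None, :]), 0))
assert np.all(np.linalg.eigvalsh(A) > 0)   # positive definite
```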

theoretical point of view, we hope to encourage further research especially on its spectrum.

2. Proof of Theorem 1

For small dimensions n ∈ {1, 2} the assertion is easily checked and we may therefore assume n ≥ 3 in the sequel. Define A := T(0, 1, n) = (|j−k|)_{1≤j,k≤n} and 1_n := (1, …, 1)^t ∈ R^n. Then, T = A + c·1_n 1_n^t is simply a rank-1 update of the matrix A. The inverse B := A^{−1} = (b_{jk})_{1≤j,k≤n} and the determinant of A were derived by Fiedler, see [3]:

b_{jk} =
  −(n−2)/(2(n−1))  if j = k ∈ {1, n},
  −1               if j = k ∈ {2, …, n−1},
  1/2              if |j−k| = 1,
  1/(2(n−1))       if (j, k) ∈ {(1, n), (n, 1)},
  0                otherwise,

det A = (−1)^{n−1} 2^{n−2} (n−1).  (18)

Thus, A^{−1} is the matrix with diagonal (−(n−2)/(2(n−1)), −1, …, −1, −(n−2)/(2(n−1))), entries 1/2 on the sub- and superdiagonal, entries 1/(2(n−1)) in the (1, n) and (n, 1) corners, and zeros elsewhere. (19)

Using the Sherman-Morrison-Woodbury formula, c.f. [7], with u := c·1_n and v := 1_n yields that T = A + uv^t is invertible if and only if 1 + v^t A^{−1} u ≠ 0, and in this case

T^{−1} = A^{−1} − (A^{−1} u v^t A^{−1}) / (1 + v^t A^{−1} u).  (20)

From (19),

v^t A^{−1} u = 2c/(n−1)  and  A^{−1} u v^t A^{−1} = c (A^{−1} 1_n)(1_n^t A^{−1}) = (c/(n−1)^2) E.  (21)
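Fiedler's explicit inverse (18)/(19) is easy to verify numerically. A minimal sketch (ours; the helper name fiedler_inverse is an assumption):

```python
import numpy as np

def fiedler_inverse(n):
    """Explicit inverse of A = (|j-k|): tridiagonal up to the four corners, cf. (18)."""
    i = np.arange(n)
    B = np.zeros((n, n))
    B[i[1:-1], i[1:-1]] = -1.0                        # interior diagonal
    B[i[:-1], i[1:]] = B[i[1:], i[:-1]] = 0.5         # off-diagonals
    B[0, 0] = B[-1, -1] = -(n - 2) / (2.0 * (n - 1))  # corner diagonal entries
    B[0, -1] = B[-1, 0] = 1.0 / (2.0 * (n - 1))       # anti-diagonal corners
    return B

n = 6
A = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
assert np.allclose(A @ fiedler_inverse(n), np.eye(n))
assert np.isclose(np.linalg.det(A), (-1) ** (n - 1) * 2 ** (n - 2) * (n - 1))
```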

Therefore, T is invertible if and only if

c ≠ −(n−1)/2,  (22)

and in this case

T^{−1} = A^{−1} − (c/((n−1)(2c+n−1))) E  (23)

where E := (1, 0, …, 0, 1)^t (1, 0, …, 0, 1) is the matrix with ones in the four corners that appears in (21). Formula (23) is equivalent to (1). Finally, (18) and the matrix determinant lemma give (2):

det T = det(A + uv^t) = (1 + v^t A^{−1} u) det A = (1 + 2c/(n−1)) (−1)^{n−1} 2^{n−2} (n−1) = (−1)^{n−1} 2^{n−2} (2c+n−1).

3. Proof of Theorem 2

We define

S := 2T^{−1} = tridiag(1, −2, 1) ∈ R^{n,n} except for the four corners S_{1,1} = S_{n,n} = −1+τ and S_{1,n} = S_{n,1} = τ,  (24)

with τ := 1/(2c+n−1). We will determine the eigenvalues and eigenvectors of S, which directly correspond to those of T. The matrix S can be viewed as a tridiagonal Toeplitz matrix with four symmetrically perturbed corners. Yueh and Cheng [9] gave formulas for eigenvalues and eigenvectors in the even more general case of tridiagonal Toeplitz matrices with all four corners arbitrarily perturbed. (8) But they derive explicit formulas only for certain

Footnote 8: Similarly, Jain [9] also considers Toeplitz matrices with arbitrarily perturbed four corners but he only investigates the positive definite ones among them in greater detail, the orthonormal basis of eigenvectors of which build his sinusoidal family of unitary transforms. He proves asymptotic equivalence of the members of this family and general connections to Markov processes but he does not compute or enclose eigenvalues and eigenvectors explicitly. Thus, his work is of minor relevance in our context.
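The rank-one-update identity (23) can be confirmed directly. A sketch of the Sherman–Morrison step (ours, not from the paper):

```python
import numpy as np

def smw_inverse(c, n):
    """T(c, n)^{-1} via Sherman-Morrison applied to T = A + c * ones * ones^t, cf. (20)."""
    A = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    Ainv = np.linalg.inv(A)
    u = c * np.ones((n, 1))
    v = np.ones((n, 1))
    return Ainv - (Ainv @ u @ v.T @ Ainv) / (1.0 + (v.T @ Ainv @ u).item())

n, c = 6, 2.0
idx = np.arange(n)
T = c + np.abs(idx[:, None] - idx[None, :]).astype(float)
assert np.allclose(smw_inverse(c, n) @ T, np.eye(n))
# (23): the correction to A^{-1} lives only on the four corners, E = e e^t, e = (1,0,...,0,1)
E = np.zeros((n, n))
E[np.ix_([0, -1], [0, -1])] = 1.0
A = np.abs(np.subtract.outer(idx, idx)).astype(float)
assert np.allclose(np.linalg.inv(T), np.linalg.inv(A) - c / ((n - 1) * (2 * c + n - 1)) * E)
```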

perturbations and only implicit ones otherwise. Therefore, we do not apply their results directly but use the symmetry of the matrix S for a reduction first.

Since S is symmetric and centrosymmetric, that is JSJ = S, where J = J_n := (δ_{n−j+1,k})_{1≤j,k≤n} is the flip-matrix, we can conjugate S in a well-known fashion to obtain a block structure that corresponds to the so-called even and odd factors of the characteristic polynomial of S (see [2], and also [5]). First, set m := ⌊n/2⌋, J := J_m and partition S into blocks

S = [[A, JBJ], [B, JAJ]]  if n is even,  (25)

S = [[A, s, JBJ], [s^t, −2, s^t J], [B, Js, JAJ]]  if n is odd,  (26)

where

A := tridiag(1, −2, 1) ∈ R^{m,m} except A_{1,1} = −1+τ,  (27)

B ∈ R^{m,m} with B_{1,m} = γ, B_{m,1} = τ, and all other entries zero,  (28)

γ := 1 if n is even, γ := 0 if n is odd, and s := (0, …, 0, 1)^t ∈ R^m if n is odd. Next, define the orthogonal matrix

K := (1/√2) [[I, −J], [J, I]]  if n is even,  (29)

K := (1/√2) [[I, 0, −J], [0, √2, 0], [J, 0, I]]  if n is odd.  (30)

Finally, conjugate S with K to obtain

K S K^t = [[A − JB, 0], [0, JAJ + BJ]]  if n is even,  (31)

K S K^t = [[A − JB, 0, 0], [0, −2, √2 s^t J], [0, √2 Js, JAJ + BJ]]  if n is odd.  (32)

The blocks are easily identified as

C = C_m := A − JB = tridiag(1, −2, 1) ∈ R^{m,m} except C_{1,1} = −1 and C_{m,m} = −2−γ,  (33)

D = D_m := JAJ + BJ = tridiag(1, −2, 1) ∈ R^{m,m} except D_{1,1} = −2+γ and D_{m,m} = −2+δ,

with δ := 1 + 2τ = 1 + 2/(2c+n−1),  (34)

E = E_{m+1} := [[−2, √2 s^t J], [√2 Js, D_m]] ∈ R^{m+1,m+1},  n odd.  (35)

Thus, in order to determine the eigenvalues and eigenvectors of S, by (31) and (32) we have to find those of the symmetric tridiagonal matrices C_m for even and odd n, D_m for even n, and E_{m+1} for odd n.
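The reduction (24)–(32) can be tested end to end: S must equal 2T^{−1}, K must be orthogonal, and KSK^t must decouple into the skew and symmetric blocks. A sketch (ours; the helper names and the explicit 1/√2 normalization follow our reading of (29)/(30)):

```python
import numpy as np

def S_matrix(c, n):
    """S = 2*T(c,n)^{-1}: tridiag(1, -2, 1) with four perturbed corners, cf. (24)."""
    tau = 1.0 / (2.0 * c + n - 1.0)
    S = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    S[0, 0] = S[-1, -1] = -1.0 + tau
    S[0, -1] = S[-1, 0] = tau
    return S

def K_matrix(n):
    """Orthogonal transform of (29)/(30)."""
    m = n // 2
    I, J = np.eye(m), np.fliplr(np.eye(m))
    K = np.zeros((n, n))
    if n % 2 == 0:
        K[:m, :m], K[:m, m:] = I, -J
        K[m:, :m], K[m:, m:] = J, I
        K /= np.sqrt(2.0)
    else:
        K[:m, :m], K[:m, m + 1:] = I / np.sqrt(2), -J / np.sqrt(2)
        K[m, m] = 1.0
        K[m + 1:, :m], K[m + 1:, m + 1:] = J / np.sqrt(2), I / np.sqrt(2)
    return K

for n in (8, 9):
    c = 0.7
    idx = np.arange(n)
    T = c + np.abs(idx[:, None] - idx[None, :]).astype(float)
    S, K = S_matrix(c, n), K_matrix(n)
    assert np.allclose(S, 2.0 * np.linalg.inv(T))   # S is indeed 2*T^{-1}
    assert np.allclose(K @ K.T, np.eye(n))          # K is orthogonal
    M = K @ S @ K.T
    m = n // 2
    assert np.allclose(M[:m, m:], 0.0)              # skew block decouples ...
    assert np.allclose(M[m:, :m], 0.0)              # ... from the symmetric block
```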

3.1. Eigenvalues and eigenvectors of C_m

The matrices C_m and D_m can be viewed as tridiagonal Toeplitz matrices with perturbed upper left and lower right corners. Eigenvalues and eigenvectors of such matrices were studied by da Fonseca [3], Kouachi [10], Willms [7], Yueh [8], and Yueh and Cheng [9]. Let

A_N := tridiag(a, b, c) ∈ R^{N,N} except (A_N)_{1,1} = α+b and (A_N)_{N,N} = β+b  (36)

with N ∈ N, N ≥ 3, a, b, c, α, β ∈ R, ac > 0. In Theorems 7 and 8 below Yueh [8] exactly states the eigenvalues and eigenvectors for special cases of α and β that fit the matrices C_m in our context. (9) Set ρ := √(a/c) and d := 2√(ac).

Theorem 7 (Yueh). Suppose α = 0 and β = d/2 ≠ 0. Then, the eigenvalues λ_1, …, λ_N of A_N are given by

λ_k = b + d cos((2k−1)π/(2N+1)),  k = 1, 2, 3, …, N.  (37)

The corresponding eigenvectors u^{(k)} = (u^{(k)}_1, …, u^{(k)}_N)^t, k = 1, …, N, are given by

u^{(k)}_j = ρ^j sin((2k−1)jπ/(2N+1)),  j = 1, 2, 3, …, N.  (38)

In case β = 0 and α = d/2 ≠ 0, the eigenvalues are given by (37) and the corresponding eigenvectors by

v^{(k)}_j = ρ^j cos((2k−1)(2j−1)π/(2(2N+1))),  j = 1, 2, 3, …, N.  (39)

Theorem 8 (Yueh). Suppose α = −β = −d/2 ≠ 0. Then, the eigenvalues λ_1, …, λ_N of A_N are given by

λ_k = b + d cos((2k−1)π/(2N)),  k = 1, 2, 3, …, N.  (40)

Footnote 9: These two theorems correspond to the four cases on p. 646 of Willms [7].
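Theorem 8 (in the case α = −β = d/2 used for C_m with even n, eigenvectors (42) below) can be verified on a concrete instance. Our own sketch, with a = c = 1 so that d = 2 and ρ = 1:

```python
import numpy as np

def A_N(a, b, c, alpha, beta, N):
    """Tridiagonal Toeplitz matrix with perturbed (1,1) and (N,N) entries, cf. (36)."""
    M = b * np.eye(N) + c * np.eye(N, k=1) + a * np.eye(N, k=-1)
    M[0, 0] += alpha
    M[-1, -1] += beta
    return M

N, b = 6, -2.0
M = A_N(1.0, b, 1.0, 1.0, -1.0, N)     # alpha = -beta = d/2 = 1
k = np.arange(1, N + 1)
pred = b + 2.0 * np.cos((2 * k - 1) * np.pi / (2 * N))   # eq. (40)
assert np.allclose(np.sort(pred), np.linalg.eigvalsh(M))
# eigenvectors (42): v_j = cos((2k-1)(2j-1) pi / (4N))
j = np.arange(1, N + 1)
for kk in k:
    v = np.cos((2 * kk - 1) * (2 * j - 1) * np.pi / (4 * N))
    lam = b + 2.0 * np.cos((2 * kk - 1) * np.pi / (2 * N))
    assert np.allclose(M @ v, lam * v)
```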

The corresponding eigenvectors u^{(k)} = (u^{(k)}_1, …, u^{(k)}_N)^t, k = 1, …, N, are given by

u^{(k)}_j = ρ^j sin((2k−1)(2j−1)π/(4N)),  j = 1, 2, 3, …, N.  (41)

In case α = −β = d/2 ≠ 0, the eigenvalues are given by (40) and the corresponding eigenvectors by

v^{(k)}_j = ρ^j cos((2k−1)(2j−1)π/(4N)),  j = 1, 2, 3, …, N.  (42)

For even n the matrices C_m are the matrices A_m with a = 1 = c = ρ = d/2, b = −2, and α = −β = d/2, so that its eigenvalues and eigenvectors are given by (40) and (42) with N = m. For odd n the matrices C_m are the matrices A_m with a = 1 = c = ρ = d/2, b = −2, α = d/2, and β = 0, so that its eigenvalues and eigenvectors are given by (37) and (39) with N = m. By (24) we simply have to invert the eigenvalues of S, and hence of C_m, and multiply them by two to obtain the eigenvalues of T stated in (5). The corresponding skew-symmetric eigenvectors given in (6) are obtained from those of C_m by a skew-symmetric extension: a vector v ∈ R^m is skew-symmetrically extended to v̂ ∈ R^n by setting v̂_j = v_j for j = 1, …, m and v̂_j = −v̂_{n+1−j} for j = m+1, …, n, which for odd n especially means v̂_{m+1} = 0. This finishes the proof of part a) of Theorem 2.

3.2. Eigenvalues and eigenvectors of D_m, n = 2m

In order to compute the eigenvectors of D_m for even n = 2m, we use the notation of Willms [7] describing the eigenvalues and eigenvectors of general matrices of type A_N defined in (36). Following Willms [7] we introduce the function

g : Z × C → C,  (N, θ) ↦
  sin(Nθ)/sin(θ)   if θ ∉ Zπ,
  N                if θ ∈ 2Zπ,
  (−1)^{N+1} N     if θ ∈ (2Z+1)π,  (43)

which is continuous in θ.

Theorem 9 (Yueh, Willms). The eigenvalues λ_k and corresponding eigenvectors v^{(k)} = (v^{(k)}_1, …, v^{(k)}_N)^t, k = 1, …, N, of the matrix A_N defined in (36) admit a representation

λ_k = b + d cos θ_k  (44)

where the θ_k are the N solutions (counting multiplicity) of

g(N+1, θ) − (2(α+β)/d) g(N, θ) + (4αβ/d²) g(N−1, θ) = 0  (45)

in the region R' = {θ = x + iy : 0 ≤ x ≤ π, x, y ∈ R}, where roots on the boundary of R' are counted with half weight. The components of the corresponding eigenvector v^{(k)} can be scaled to

v^{(k)}_j =
  ρ^j (sin(jθ_k) − (2α/d) sin((j−1)θ_k))/sin θ_k   if θ_k ∉ {0, π},
  ρ^j (j − (2α/d)(j−1))                             if θ_k = 0,
  ρ^j (−1)^{j−1} (j + (2α/d)(j−1))                  if θ_k = π,
j ∈ {1, …, N}.  (46)

Applied to D_m, n even, we have N = m = n/2, a = 1 = c = ρ = d/2, b = −2, α = γ = 1, β = δ = 1 + 2/(2c+n−1). Therefore, (44), (45), and (46) reduce to

λ_k = 2(−1 + cos θ_k),  (47)

0 = g(m+1, θ) − (1+δ) g(m, θ) + δ g(m−1, θ) = [g(m+1, θ) − g(m, θ)] − δ [g(m, θ) − g(m−1, θ)],  (48)

v^{(k)}_j =
  cos((2j−1)θ_k/2)/cos(θ_k/2)   if θ_k ∉ {0, π},
  1                              if θ_k = 0,
  (−1)^{j−1}(2j−1)               if θ_k = π,
1 ≤ j ≤ m.  (49)

The function g fulfills the following trigonometric identity (cf. Willms [7], p. 645):

g(j, θ) − g(j−1, θ) =
  cos((2j−1)θ/2)/cos(θ/2)   if θ ∉ Zπ,
  1                          if θ ∈ 2Zπ,
  (−1)^{j−1}(2j−1)           if θ ∈ (2Z+1)π,
j ∈ N.  (50)

The regularity of S implies λ_k ≠ 0, so (48) does not have a root θ = 0 by (47). Thus, by (50) and (48) a θ_k is either a solution of

0 = (cos((n+1)θ/2) − δ cos((n−1)θ/2))/cos(θ/2) = z^{−n/2} (z^{n+1} − δ(z^n + z) + 1)/(z+1),  z := e^{iθ},  (51)

in R' \ {0, π}, that is z_k := e^{iθ_k} ∈ C \ {−1, 0, 1}, or θ_k = π and

0 = (−1)^m (n+1) + (−1)^m (n−1) δ  ⟺  δ = −(n+1)/(n−1).  (52)

Consequently, in the second case the m−1 remaining roots z_k, k ∈ {1, …, m} \ {m}, are forced to be solutions of

0 = (z^{n+1} + ((n+1)/(n−1))(z^n + z) + 1)/(z+1)^3  (53)

in R' \ {0, π}. Summing up, µ_k := 2/λ_k = 1/(−1 + cos θ_k), k = 1, …, m, are the eigenvalues of T stated in (7). The corresponding symmetric eigenvectors w^{(k)} = (w^{(k)}_1, …, w^{(k)}_n)^t stated in (10) are the symmetric extensions of the v^{(k)} given in (49) to the first m components:

w^{(k)} = √2 K^t (0, …, 0, (v^{(k)})^t)^t = ((Jv^{(k)})^t, (v^{(k)})^t)^t.

Componentwise this means w^{(k)}_{m+j} = v^{(k)}_j = w^{(k)}_{m+1−j}, j = 1, …, m. This finishes the proof of part b) of Theorem 2 for even n = 2m.

3.3. Eigenvalues and eigenvectors of E_{m+1}, n = 2m+1

Finally, we determine the eigenvalues and eigenvectors of the matrices E = E_{m+1} defined in (35) when n = 2m+1 is odd. It is somehow more convenient for us to transform E further by conjugating with the diagonal matrix Q := diag(√2, 1, …, 1) ∈ R^{m+1,m+1}, that is, we actually consider

F := Q^{−1} E Q = the tridiagonal matrix with diagonal (−2, …, −2, −2+δ), superdiagonal (1, …, 1), and subdiagonal (2, 1, …, 1),  (54)

instead of E. The matrix F differs from E only in the (1,2)- and (2,1)-entry, where √2 is replaced by 1 and 2 respectively. Let us fix an eigenvalue λ ∈ C of F with corresponding eigenvector u = (u_1, …, u_{n'})^t ∈ C^{n'}, n' := m+1. First we treat the case λ = −4. By (54), (Fu)_j = −4u_j for j = 1, …, n'−1 implies u_j = (−1)^{j−1} 2 u_1, j = 2, …, n'. Evaluating the last component of (Fu)_j = −4u_j yields

0 = (Fu)_{n'} + 4u_{n'} = u_{n'−1} + (2+δ)u_{n'} = (−1)^{n'−1} 2 u_1 (1+δ),

which means δ = −1, that is, c = −n/2. Thus, we see that the case λ = −4 is included in (7) and the last line of (10) with θ = π and z := e^{iθ} = −1 being a double root of p_n(−1; z) = z^{n+1} + z^n + z + 1.

Next, for λ ≠ −4, we follow the arguments of Yueh and Cheng [9] in order to determine the eigenvalues and eigenvectors of F. The eigenvector u can be extended to a sequence (u_j)_{j∈N_0} ∈ C^{N_0} which fulfills the three-term recursion

u_{k−1} + (−2−λ) u_k + u_{k+1} = f_k  (55)

with boundary conditions and constraints

f_k := −u_1 if k = 2,  −δ u_{n'} if k = n',  0 else,  (56)

u_0 = 0 = u_{n'+1}.  (57)

Setting σ := (δ_{1,j})_{j∈N_0} = (0, 1, 0, …) ∈ C^{N_0}, e := (1, 0, 0, …), and c̄ := (c^j)_{j∈N_0} ∈ C^{N_0} for c ∈ C, the recursion (55) reads in sequence notation

(e + (−2−λ)σ + σ²) * u = σ * (u_1 e + f)  (58)

where the product of two sequences a = (a_j)_{j∈N_0} and b = (b_j)_{j∈N_0} is the convolution a * b = c = (c_j)_{j∈N_0}, c_j := Σ_{k=0}^j a_k b_{j−k}. Equation (58) has the solution

u = σ * (u_1 e + f) * (e + (−2−λ)σ + σ²)^{−1} = (1/ω)(γ̄_+ − γ̄_−) * (u_1 e + f) = (2i/ω)(sin jθ)_{j∈N_0} * (u_1 e + f)  (59)

where

γ_± := (2 + λ ± ω)/2 = e^{±iθ} = cos θ ± i sin θ,  θ ∈ C, Re θ ∈ [0, π],  (60)

are the roots of the quadratic polynomial z² + (−2−λ)z + 1 ∈ C[z] and ω := √((2+λ)² − 4) = 2i sin θ ≠ 0 as λ ∉ {0, −4}. Evaluating (59) componentwise yields

u_j = (u_1 sin(jθ) − u_1 H(j−2) sin((j−2)θ) − δ u_{n'} H(j−n') sin((j−n')θ))/sin θ  (61)

for j ≥ 1, where H is the Heaviside function, that is, H(x) = 0 if x ≤ 0 and H(x) = 1 if x > 0. Let us first consider the case n' = m+1 ≥ 3, that is n ≥ 5. Then 2, n', n'+1 are pairwise distinct and (61) evaluated for these values supplies

u_2/u_1 = sin(2θ)/sin θ = 2 cos θ,  (62)

(u_{n'}/u_1) sin θ = sin(n'θ) − sin((n'−2)θ),  (63)

0 = sin((n'+1)θ) − sin((n'−1)θ) − δ (u_{n'}/u_1) sin θ.  (64)

Note that u_1 (and u_{n'}) cannot be zero, since otherwise (55)-(57) would imply u = 0. If n' = 2, that is n = 3, (63) and (64) still hold. But (63) becomes u_2/u_1 = 2 cos θ, that is, (62) again. (65)

Inserting (63) in (64) gives

0 = sin((n'+1)θ) − sin((n'−1)θ) − δ (sin(n'θ) − sin((n'−2)θ)) = 2 (cos(n'θ) − δ cos((n'−1)θ)) sin θ.

Since sin θ ≠ 0, we obtain the following necessary condition for θ:

0 = cos(n'θ) − δ cos((n'−1)θ).  (66)

By substituting z := e^{iθ} we may convert this trigonometric equation to a polynomial one:

0 = z^{2n'} + 1 − δ(z^{2n'−1} + z) = z^{n+1} + 1 − δ(z^n + z) = p_n(δ; z).  (67)

Thus, the eigenvalues of F (which are clearly those of E) are described by the roots of the polynomial p_n(δ; z) in exactly the same way as those of D_m in the preceding subsection. Hence, the eigenvalues of T corresponding to symmetric eigenvectors are also for odd n those stated in (7). By (61) and (62) the components of the eigenvector u can be scaled, for j ≥ 2, to

u_j = (sin(jθ) − sin((j−2)θ))/(2 sin θ) = cos((j−1)θ).

The corresponding extended eigenvector of S and T can be chosen as w := (cos((m+1−j)θ))_{j=1,…,n}, obtained by applying √2 K^t to the vector (0, …, 0, (Qu)^t)^t. This establishes the last line of (10) and finishes the proof of Theorem 2.

21 symmetric eigenvectors are also for odd n those stated in (7). By (6) and (6) the components of the eigenvector u can be scaled to u j = sin jθ u u sin(j )θ = sin jθ cos(θ) sin(j )θ = sin jθ (sin(j )θ + sin jθ) = (sin jθ sin(j )θ) = cos(j )θ sin θ. The corresponding extended eigenvector of S and T can be chosen as w := ( ) 0 sin θ Kt = (cos m + j θ) j=,...,n Qu which establishes the last line of (0) and finishes the proof of Theorem. 4. Proof of Theorem 3 We use a continuity argument to prove Theorem 3. Consider a small perturbation c of c. Then, T := T ( c, n) is regular and its eigenvalues and eigenvectors are described in Theorem. By continuity they will converge for c c to those of T. First note that the sew-symmetric eigenvectors and corresponding eigenvalues of T given in Theorem a) do not depend on c, so that they coincide with those of T which proves part a) of Theorem 3. The quantity δ := + n converges to ± for c c =. Thus, c+n the roots of the polynomial p n (δ; z) := defined in (8) converge to those of p n (± ; z) := z n+ δz(z n +)+ z+ if n is even, z n+ δz(z n + ) + if n is odd, z(z n +) z+ if n is even, z(z n + ) if n is odd which are 0 and exp(iθ ) where θ := ( )π, {,..., n }. For n {,..., n m } we have θ R = (0, π] ir >0 π + ir >0. Inserting

these θ_k in (7) and (10) gives the first n−m−1 symmetric eigenvectors and corresponding eigenvalues stated in (15) and (16). Finally, the remaining root z = 0 may formally be written as 0 = exp(iθ_l), l := n−m, with θ_l := i(+∞). The corresponding perturbed θ̃_l ∈ R converging to θ_l supplies an eigenvector w̃^{(l)} = (w̃^{(l)}_1, …, w̃^{(l)}_n)^t ∈ R^n with components given in (10):

w̃^{(l)}_{m+j} = w̃^{(l)}_{n−m+1−j} =
  cos((2j−1)θ̃_l/2)  if n is even,
  cos((j−1)θ̃_l)      if n is odd,
j = 1, …, l.

Scaling the first and last component of w̃^{(l)} to 1 by dividing through w̃^{(l)}_{m+l} yields

lim_{θ̃_l→θ_l} w̃^{(l)}_{m+j}/w̃^{(l)}_{m+l} = lim_{θ̃_l→θ_l} cos((2j−1)θ̃_l/2)/cos((2l−1)θ̃_l/2) (n even), resp. cos((j−1)θ̃_l)/cos((l−1)θ̃_l) (n odd), = δ_{j,l}

for j ∈ {1, …, l}. Thus, the vector u := (1, 0, …, 0, 1)^t ∈ R^n is contained in the kernel of T, which of course could have been verified directly without the above derivation: (Tu)_j = T_{j,1} + T_{j,n} = c + (j−1) + c + (n−j) = 2c + n − 1 = 0. This finishes the proof of part b) of Theorem 3.

5. Proof of Corollary 5

a) If T is singular, then Theorem 3 shows that all eigenvalues are simple, so that the assertion holds in this case. Thus, we may assume that T is regular. By Theorem 2 the eigenvalues corresponding to skew-symmetric eigenvectors are pairwise distinct, and the same holds true for the eigenvalues corresponding to symmetric eigenvectors by P1)-P8). Indeed, this fact can be deduced directly, without any special knowledge on the eigenvalues of T, from the tridiagonal matrices C_m, D_m, and E_{m+1} defined in the proof of Theorem 2 (cf. (33)-(35)), which of course only have simple eigenvalues. Thus, if we assume that T has a multiple eigenvalue λ, then the corresponding eigenspace necessarily contains symmetric and skew-symmetric eigenvectors. (Footnote 10: Indeed, this is a priori known by a result of Delsarte and Genin [4], Theorem 8, p.
204, which says that an eigenspace of dimension d of a real symmetric Toeplitz matrix possesses a basis consisting of ⌈d/2⌉ (or ⌊d/2⌋) symmetric and ⌊d/2⌋ (or ⌈d/2⌉) skew-symmetric eigenvectors, that is, the numbers of symmetric and skew-symmetric basis eigenvectors split the eigenspace dimension as evenly as possible.)

But then (5) and (7) imply λ = 1/(−1 + cos θ), where θ = (2k−1)π/n for some k ∈ {1, …, m} satisfies

0 = cos((n+1)θ/2) − δ cos((n−1)θ/2) = (1+δ)(−1)^k sin((2k−1)π/(2n)).

We see that this is only possible for δ = −1, that is c = −n/2. By P3) the eigenvalues of T corresponding to symmetric eigenvectors,

µ_k = 1/(cos((2k−1)π/n) − 1),  k = 1, …, n−m,

coincide with those corresponding to skew-symmetric eigenvectors (cf. (5)), with one exception where n = 2m+1 is odd and k = n−m = m+1, when µ_{m+1} = −1/2 is the eigenvalue of smallest absolute value.

b) By (6) the first component of a skew-symmetric eigenvector v^{(k)} is v^{(k)}_1 = cos((2k−1)π/(2n)), which is distinct from 0 for k ∈ {1, …, m}. By Theorem 4 the Iohvidov parameter of the corresponding eigenvalue is 0. If T is singular, then by (16) the first two components of a symmetric eigenvector w^{(k)}, k ∈ {1, …, n−m}, are

w^{(k)}_1 = cos((2k−1)π/2) = 0 if k ≠ n−m,  1 if k = n−m,
w^{(k)}_2 = (−1)^{k−1} sin((2k−1)π/(n−1)) ≠ 0 if k ≠ n−m,  0 if k = n−m.

Since by a) all eigenvalues are simple, this implies that the non-zero eigenvalues of T corresponding to symmetric eigenvectors have Iohvidov parameter one and the eigenvalue µ = 0 with eigenvector (1, 0, …, 0, 1)^t has Iohvidov parameter zero.

Now suppose that T is regular. Then, by (10) the first component of a symmetric eigenvector w = (w_1, …, w_n)^t can be found as

w_1 = w_n = cos((n−1)θ/2)  if θ ≠ π,
w_1 = w_n = (−1)^{m−1}(n−1)  if n is even and θ = π,

for some θ ∈ R := (0, π] ∪ iR_{>0} ∪ (π + iR_{>0}) satisfying cos((n+1)θ/2) − δ cos((n−1)θ/2) = 0, δ = 1 + 2/(2c+n−1).

Thus, if θ = π, then clearly w_1 ≠ 0. But this also holds for θ ≠ π, since otherwise θ must have the form θ = (2k−1)π/(n−1) for some k ∈ {1, …, n−m−1}, which by (15) [or P4)] corresponds to singular T (formally δ = ±∞) treated before. Thus, again w_1 ≠ 0 shows that the Iohvidov parameter of the corresponding eigenvalue is 0.

Appendix

This appendix contains the rather technical proofs of P1)-P8), P7'), P8'), (11), (12). The roots of p_n(δ; z) are those of

q_n(δ; z) := z^{n+1} − δ(z^n + z) + 1,

where for even n the multiplicity of the root −1 is reduced by one. The roots of q_n(δ; z) of modulus 1 correspond to those of the trigonometric polynomial

Q_n(δ; θ) := (1/2) e^{−i(n+1)θ/2} q_n(δ; e^{iθ}) = cos((n+1)θ/2) − δ cos((n−1)θ/2).

Actually, we may view q_n(δ; z) (and Q_n(δ; θ)) as a homotopy of (trigonometric) polynomials, which causes the distinguished cases for δ. To make this clear, let us look at the polynomials

q_n(0; z) = z^{n+1} + 1,
q_n(1; z) = z^{n+1} − z^n − z + 1 = (z^n − 1)(z − 1),
q_n(−1; z) = z^{n+1} + z^n + z + 1 = (z^n + 1)(z + 1),
q_n(±∞; z) := z^n + z,

with corresponding trigonometric polynomials

Q_n(0; θ) = cos((n+1)θ/2),
Q_n(1; θ) = cos((n+1)θ/2) − cos((n−1)θ/2) = −2 sin(nθ/2) sin(θ/2),
Q_n(−1; θ) = cos((n+1)θ/2) + cos((n−1)θ/2) = 2 cos(nθ/2) cos(θ/2),
Q_n(±∞; θ) := (1/2) e^{−i(n+1)θ/2} q_n(±∞; e^{iθ}) = cos((n−1)θ/2).

Now for δ ∈ [0, 1], q_n(δ; z) is a homotopy of q_n(0; z) to q_n(1; z), and for δ ∈ [−1, 0], q_n(δ; z) is a homotopy of q_n(0; z) to q_n(−1; z). If δ ∈ [1, +∞], δ ↦ δ^{−1} q_n(δ; z) is a homotopy of q_n(1; z) to q_n(±∞; z), and finally, if δ ∈ [−∞, −1],

δ^{−1} q_n(δ; z) is a homotopy from q_n(−1; z) to q_n(±∞; z). The same holds true for the trigonometric polynomials. The whole point now is to establish the stated inclusions for the zeros of p_n(δ; z) by those of the polynomials q_n(0; z), q_n(1; z), q_n(−1; z), q_n(±∞; z), which are immediately deduced from the trigonometric representations given above. If we define

θ_k^{(0)} := (2k−1)π/(n+1),   k = 1, …, n−m,
θ_k^{(1)} := 2(k−1)π/n,   k = 1, …, n−m,
θ_k^{(−1)} := (2k−1)π/n,   k = 1, …, n−m,
θ_k^{(±∞)} := (2k−1)π/(n−1),   k = 1, …, n−m−1,

then θ_k^{(δ)} ∈ [0, π] and Q_n(δ; ±θ_k^{(δ)}) = 0 for δ ∈ {0, 1, −1, ±∞} and the given k. The ranges of the k are already chosen such that for even n the root z = −1 that corresponds to θ = π is neglected. This proves P1) to P4). For d) note that z = 0 clearly is the remaining root of q_n(±∞; z) = z(z^{n−1} + 1). This case corresponds to singular matrices T(c, n), as already shown in the proof of Theorem 3. Define the open intervals

I_k^{(0,1)} := (θ_k^{(1)}, θ_k^{(0)}),   k = 1, …, n−m,
I_k^{(−1,0)} := (θ_k^{(0)}, θ_k^{(−1)}),   k = 1, …, n−m,
I_k^{(1,∞)} := (θ_k^{(±∞)}, θ_{k+1}^{(1)}),   k = 1, …, n−m−1,
I_k^{(−∞,−1)} := (θ_k^{(−1)}, θ_k^{(±∞)}),   k = 1, …, n−m−1.

Then it is routine to check that none of the angles θ_j^{(±∞)}, j = 1, …, n−m−1, is contained in any of these intervals. But this means that the function

f(θ) := Q_n(0; θ)/Q_n(±∞; θ) = cos((n+1)θ/2) / cos((n−1)θ/2)   (68)

restricted to any of those intervals is well defined and continuous and can be continuously extended to the boundaries in those cases where they differ from the zeros θ_k^{(±∞)} of the denominator Q_n(±∞; θ). Since f(θ_k^{(1)}) = 1 and f(θ_k^{(0)}) = 0, the intermediate value theorem yields (0, 1) ⊆ f(I_k^{(0,1)}). Thus, for each δ ∈ (0, 1) there is a θ_k = θ_k(δ) ∈ I_k^{(0,1)} such that f(θ_k) = δ, which

is equivalent to Q_n(δ; θ_k) = 0. This proves P5) and analogously P6). In the same way f(θ_{k+1}^{(1)}) = 1 and lim_{θ→θ_k^{(±∞)}} f(θ) = +∞ imply (1, +∞) ⊆ f(I_k^{(1,∞)}). Hence, again there exists for each δ ∈ (1, +∞) a θ_k = θ_k(δ) ∈ I_k^{(1,∞)} such that f(θ_k) = δ, so that Q_n(δ; θ_k) = 0. This proves the inclusions for the θ_k stated in P7) for k = 1, …, n−m−1, and analogously the inclusions for the θ_k given in P8) for k = 1, …, n−m−1. For δ ∈ (1, +∞) we have q_n(δ; 0) = 1 > 0 > 2(1 − δ) = q_n(δ; 1). Using the intermediate value theorem we find the last remaining root z ∈ (0, 1) to prove P7).

Next, we prove P8). Let δ ∈ (−∞, −1) and suppose that n is odd; then q_n(δ; −1) = 2(1 + δ) < 0 < 1 = p_n(δ; 0), and again the intermediate value theorem supplies the missing root z_{n−m} ∈ (−1, 0). Now suppose that n is even. Then all coefficients of q_n(δ; z) = z^{n+1} + |δ| z^n + |δ| z + 1 are positive, whence q_n(δ; z) does not have positive real roots. The coefficients of q_n(δ; −z) = −z^{n+1} + |δ| z^n − |δ| z + 1 change their signs three times, so that by Descartes' rule of signs q_n(δ; z) has either three or one negative real roots, where −1 is clearly one of them. Looking at the derivative q′_n(δ; z) = (n+1)z^n − δ(n z^{n−1} + 1), we see

q′_n(δ; −1) = (n+1) + δ(n−1)  { < 0 if δ < −(n+1)/(n−1),  = 0 if δ = −(n+1)/(n−1),  > 0 if δ > −(n+1)/(n−1). }

Thus, if δ < −(n+1)/(n−1), then q_n(δ; z) assumes negative values in (−1, 0) and, since q_n(δ; 0) = 1 > 0, there must be at least one root z_m = z_{n−m} in (−1, 0). By the preceding considerations z_m, −1, and 1/z_m are all real roots of q_n(δ; z). If δ = −(n+1)/(n−1), then z = −1 is a three-fold root of q_n(δ; z), as already mentioned in Theorem 2. If δ > −(n+1)/(n−1), then q_n(δ; −1) = 0, q′_n(δ; −1) > 0, and q_n(δ; 0) = 1 > 0 imply that q_n(δ; z) must have an even number 2ν, ν ∈ N_0, of roots in (−1, 0) counting multiplicities. Since q_n(δ; ·) is self-inversive, it must have the same number of roots in (−∞, −1), so that we obtain 4ν + 1 real roots overall. But we already found n − 2 non-real roots of q_n(δ; ·) on the complex unit circle, whence ν = 0. Since cos((n−1)θ/2) does not have a zero in ((n−1)π/n, π), the function f(θ) (cf. (68)) is well defined and continuous and can be continuously extended to the boundaries using L'Hospital's rule at the

right boundary:

f((n−1)π/n) = −1 and f(π) := lim_{θ→π} cos((n+1)θ/2)/cos((n−1)θ/2) = lim_{θ→π} ((n+1) sin((n+1)θ/2)) / ((n−1) sin((n−1)θ/2)) = −(n+1)/(n−1).

Thus, δ ∈ (−(n+1)/(n−1), −1) ⊆ f(((n−1)π/n, π)) finally supplies the remaining root θ_m ∈ ((n−1)π/n, π) with f(θ_m) = δ, that is, Q_n(δ; θ_m) = 0, proving P8).

In all inclusions given in P5) to P8) we found that the θ_k = θ_k(δ) are implicitly defined functions on the given intervals by the equation f(θ_k(δ)) = δ. Taking derivatives on both sides yields

1 = ( −((n+1)/2) sin((n+1)θ_k(δ)/2) cos((n−1)θ_k(δ)/2) + ((n−1)/2) cos((n+1)θ_k(δ)/2) sin((n−1)θ_k(δ)/2) ) / cos²((n−1)θ_k(δ)/2) · θ′_k(δ)
  = ( −((n+1)/2) sin((n+1)θ_k(δ)/2) + ((n−1)/2) δ sin((n−1)θ_k(δ)/2) ) / cos((n−1)θ_k(δ)/2) · θ′_k(δ),

where 0 < cos²((n−1)θ_k(δ)/2) and cos((n+1)θ_k(δ)/2) = δ cos((n−1)θ_k(δ)/2) were used. This shows θ′_k(δ) ≠ 0 in the given open intervals and also proves (). Hence, θ_k(δ) depends monotonically on δ. From P1) to P4), where the boundary values of δ were considered, we see that the real θ_k(δ) and the real z = z(δ) [case P7)], z_{n−m} = z_{n−m}(δ) [case P8)] are monotonically decreasing functions of δ.

Finally, we prove (). For δ close to but distinct from 1 we compute, using the chain rule and ():

(cos θ_1(δ))′ = −θ′_1(δ) sin θ_1(δ) = −sin θ_1(δ) cos((n−1)θ_1(δ)/2) / ( −((n+1)/2) sin((n+1)θ_1(δ)/2) + ((n−1)/2) δ sin((n−1)θ_1(δ)/2) ).

Since lim_{δ→1} θ_1(δ) = 0, both the numerator and denominator of the above fraction converge to 0 for δ → 1, so that we can apply L'Hospital's rule to

compute the limit:

lim_{δ→1} (cos θ_1(δ))′ = lim_{δ→1} ( (−cos θ_1(δ) cos((n−1)θ_1(δ)/2) + ((n−1)/2) sin θ_1(δ) sin((n−1)θ_1(δ)/2)) θ′_1(δ) ) / ( (−((n+1)/2)² cos((n+1)θ_1(δ)/2) + ((n−1)/2)² δ cos((n−1)θ_1(δ)/2)) θ′_1(δ) + ((n−1)/2) sin((n−1)θ_1(δ)/2) ).

Since lim_{δ→1} |θ′_1(δ)| = +∞ by (), the above fraction converges for δ → 1 to

(−1) / ( −((n+1)/2)² + ((n−1)/2)² ) = 1/n,

proving ().

Finally, we prove P7′) and P8′). Set q(z) := z^{n+1} − δ z^n − δ z + 1.

P7′) Since δ > 1, Cauchy's bound for the moduli of the roots of q(z) is 1 + δ. Hence, q(δ) = 1 − δ² < 0 and lim_{x→+∞} q(x) = +∞ imply δ < ζ ≤ δ + 1, which proves P7′) if ρ = 1. If ρ = √(δ² − 1) < 1, then

q(δ + ρ) = ρ(δ + ρ)^n − δ(δ + ρ) + 1 ≥ ρ(δ + ρ) − δ(δ + ρ) + 1 = 0

also shows δ < ζ ≤ δ + ρ.

P8′) If n is odd, then p_n(δ; −z) = z^{n+1} + δ z^n + δ z + 1 = p_n(−δ; z), and the assertion follows from P7′). Now suppose that n ≥ 3 is even. If δ < −2, then ρ = 1, δ + ρ = δ + 1 < −1, and

q(δ + ρ) = (δ + 1)^n − δ(δ + 1) + 1 > (−1 − δ)³ − δ(δ + 1) + 1 = −δ³ − 4δ² − 4δ > 0 > 1 − δ² = q(δ).

This implies ζ ∈ (δ, δ + ρ) = (δ, δ + 1). If −2 ≤ δ < −(n+1)/(n−1), then ρ = |δ + 1| = −1 − δ, δ + ρ = −1, and for

p(z) := p_n(δ; z) = q(z)/(z + 1) = z^n + (1 + δ) Σ_{j=1}^{n−1} (−z)^j + 1

we have p(δ + ρ) = p(−1) = 2 + (n−1)(1 + δ) < 0 < (1 − δ²)/(δ + 1) = p(δ). Again, this implies ζ ∈ (δ, δ + ρ) = (δ, −1).
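The root localization in P7′) as reconstructed above is easy to test numerically. The following sketch (ours, not from the paper) computes the roots of q(z) = z^{n+1} − δz^n − δz + 1 for several δ > 1 and checks that the unique real root ζ > 1 lies in (δ, δ + ρ] with ρ = min(1, √(δ² − 1)).

```python
# Numerical check of the P7') localization (as reconstructed here): for delta > 1,
# the real root zeta > 1 of q(z) = z^(n+1) - delta*z^n - delta*z + 1 satisfies
# delta < zeta <= delta + rho with rho = min(1, sqrt(delta^2 - 1)).
import numpy as np

def zeta(delta, n):
    coeffs = np.zeros(n + 2)
    coeffs[0], coeffs[-1] = 1.0, 1.0        # z^(n+1) and constant term
    coeffs[1], coeffs[-2] = -delta, -delta  # z^n and z terms
    r = np.roots(coeffs)
    real = r[np.abs(r.imag) < 1e-8].real
    return real.max()                       # the only real root exceeding 1

for n in (5, 6, 11):
    for delta in (1.05, 1.2, 3.0):
        rho = min(1.0, np.sqrt(delta ** 2 - 1.0))
        z = zeta(delta, n)
        assert delta < z <= delta + rho + 1e-9
```

The remaining roots produced by `np.roots` lie on the unit circle (pairs e^{±iθ_k}) or are the reciprocal 1/ζ ∈ (0, 1), matching the picture developed in P7).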

Acknowledgment

I thank Professor A. Böttcher from Chemnitz University of Technology for valuable hints and comments that led to improvements of this paper.

References

[1] M. Bogoya, A. Böttcher, S. Grudsky, Eigenvalues of Hermitian Toeplitz matrices with polynomially increasing entries. Journal of Spectral Theory 2 (2012).
[2] A. Cantoni, P. Butler, Eigenvalues and eigenvectors of symmetric centrosymmetric matrices. Linear Algebra Appl. 13 (1976).
[3] C.M. da Fonseca, The characteristic polynomial of some perturbed tridiagonal k-Toeplitz matrices. Applied Mathematical Sciences 1 (2007).
[4] P. Delsarte, Y. Genin, Spectral properties of finite Toeplitz matrices. Springer Lecture Notes in Control and Information Sciences 58 (1984).
[5] P. Delsarte, Y. Genin, Y. Kamp, Parametric Toeplitz systems. Circuits, Systems and Signal Processing 3 (1984).
[6] R.T. Gregory, D.L. Karney, A Collection of Matrices for Testing Computational Algorithms. Wiley-Interscience, New York/London/Sydney/Toronto, 1969.
[7] G.H. Golub, C.F. Van Loan, Matrix Computations. 3rd ed., The Johns Hopkins University Press, Baltimore and London, 1996.
[8] I.S. Iohvidov, Hankel and Toeplitz Matrices and Forms. Birkhäuser, 1982.
[9] A.K. Jain, A sinusoidal family of unitary transforms. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-1 (4) (1979).
[10] S. Kouachi, Eigenvalues and eigenvectors of tridiagonal matrices. Electronic Journal of Linear Algebra 15 (2006).

[11] M.H. Lietzke, R.W. Stoughton, M.P. Lietzke, A comparison of several methods for inverting large symmetric positive definite matrices. Mathematics of Computation 18 (1964).
[12] L. Rehnquist, Inversion of certain symmetric band matrices. BIT Numerical Mathematics 12 (1972).
[13] J. Todd, The problem of error in digital computation. In: Error in Digital Computation, vol. I, L.B. Rall, ed., John Wiley & Sons, Inc., 1965.
[14] W.F. Trench, Spectral distribution of generalized Kac–Murdock–Szegő matrices. Linear Algebra and its Applications 347 (2002).
[15] W.F. Trench, Characterization and properties of matrices with generalized symmetry or skew symmetry. Linear Algebra Appl. 377 (2004).
[16] J. Westlake, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations. John Wiley & Sons, Inc., 1968.
[17] A.R. Willms, Analytic results for the eigenvalues of certain tridiagonal matrices. SIAM J. Matrix Anal. Appl. 30 (2008).
[18] Wen-Chyuan Yueh, Eigenvalues of several tridiagonal matrices. Applied Mathematics E-Notes 5 (2005).
[19] Wen-Chyuan Yueh, Sui Sun Cheng, Explicit eigenvalues and inverses of tridiagonal Toeplitz matrices with four perturbed corners. ANZIAM J. 49 (2008).


More information

Mathematical Foundations

Mathematical Foundations Chapter 1 Mathematical Foundations 1.1 Big-O Notations In the description of algorithmic complexity, we often have to use the order notations, often in terms of big O and small o. Loosely speaking, for

More information

Generalized eigenspaces

Generalized eigenspaces Generalized eigenspaces November 30, 2012 Contents 1 Introduction 1 2 Polynomials 2 3 Calculating the characteristic polynomial 5 4 Projections 7 5 Generalized eigenvalues 10 6 Eigenpolynomials 15 1 Introduction

More information

Finding eigenvalues for matrices acting on subspaces

Finding eigenvalues for matrices acting on subspaces Finding eigenvalues for matrices acting on subspaces Jakeniah Christiansen Department of Mathematics and Statistics Calvin College Grand Rapids, MI 49546 Faculty advisor: Prof Todd Kapitula Department

More information

LINEAR FORMS IN LOGARITHMS

LINEAR FORMS IN LOGARITHMS LINEAR FORMS IN LOGARITHMS JAN-HENDRIK EVERTSE April 2011 Literature: T.N. Shorey, R. Tijdeman, Exponential Diophantine equations, Cambridge University Press, 1986; reprinted 2008. 1. Linear forms in logarithms

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 8 Lecture 8 8.1 Matrices July 22, 2018 We shall study

More information

N-WEAKLY SUPERCYCLIC MATRICES

N-WEAKLY SUPERCYCLIC MATRICES N-WEAKLY SUPERCYCLIC MATRICES NATHAN S. FELDMAN Abstract. We define an operator to n-weakly hypercyclic if it has an orbit that has a dense projection onto every n-dimensional subspace. Similarly, an operator

More information

Chapter 2 Spectra of Finite Graphs

Chapter 2 Spectra of Finite Graphs Chapter 2 Spectra of Finite Graphs 2.1 Characteristic Polynomials Let G = (V, E) be a finite graph on n = V vertices. Numbering the vertices, we write down its adjacency matrix in an explicit form of n

More information

Linear Algebra 2 Spectral Notes

Linear Algebra 2 Spectral Notes Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex

More information

Convexity of the Joint Numerical Range

Convexity of the Joint Numerical Range Convexity of the Joint Numerical Range Chi-Kwong Li and Yiu-Tung Poon October 26, 2004 Dedicated to Professor Yik-Hoi Au-Yeung on the occasion of his retirement. Abstract Let A = (A 1,..., A m ) be an

More information

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY

RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY RITZ VALUE BOUNDS THAT EXPLOIT QUASI-SPARSITY ILSE C.F. IPSEN Abstract. Absolute and relative perturbation bounds for Ritz values of complex square matrices are presented. The bounds exploit quasi-sparsity

More information

Sign changes of Fourier coefficients of cusp forms supported on prime power indices

Sign changes of Fourier coefficients of cusp forms supported on prime power indices Sign changes of Fourier coefficients of cusp forms supported on prime power indices Winfried Kohnen Mathematisches Institut Universität Heidelberg D-69120 Heidelberg, Germany E-mail: winfried@mathi.uni-heidelberg.de

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

Chapter 2 Formulas and Definitions:

Chapter 2 Formulas and Definitions: Chapter 2 Formulas and Definitions: (from 2.1) Definition of Polynomial Function: Let n be a nonnegative integer and let a n,a n 1,...,a 2,a 1,a 0 be real numbers with a n 0. The function given by f (x)

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

AN EXAMPLE OF SPECTRAL PHASE TRANSITION PHENOMENON IN A CLASS OF JACOBI MATRICES WITH PERIODICALLY MODULATED WEIGHTS

AN EXAMPLE OF SPECTRAL PHASE TRANSITION PHENOMENON IN A CLASS OF JACOBI MATRICES WITH PERIODICALLY MODULATED WEIGHTS AN EXAMPLE OF SPECTRAL PHASE TRANSITION PHENOMENON IN A CLASS OF JACOBI MATRICES WITH PERIODICALLY MODULATED WEIGHTS SERGEY SIMONOV Abstract We consider self-adjoint unbounded Jacobi matrices with diagonal

More information

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY

LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT FALL 2006 PRINCETON UNIVERSITY LECTURE VI: SELF-ADJOINT AND UNITARY OPERATORS MAT 204 - FALL 2006 PRINCETON UNIVERSITY ALFONSO SORRENTINO 1 Adjoint of a linear operator Note: In these notes, V will denote a n-dimensional euclidean vector

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

Solutions for Chapter 3

Solutions for Chapter 3 Solutions for Chapter Solutions for exercises in section 0 a X b x, y 6, and z 0 a Neither b Sew symmetric c Symmetric d Neither The zero matrix trivially satisfies all conditions, and it is the only possible

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3].

A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3]. Appendix : A Very Brief Linear ALgebra Review Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics Very often in this course we study the shapes

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

FFTs in Graphics and Vision. Homogenous Polynomials and Irreducible Representations

FFTs in Graphics and Vision. Homogenous Polynomials and Irreducible Representations FFTs in Graphics and Vision Homogenous Polynomials and Irreducible Representations 1 Outline The 2π Term in Assignment 1 Homogenous Polynomials Representations of Functions on the Unit-Circle Sub-Representations

More information

Archive of past papers, solutions and homeworks for. MATH 224, Linear Algebra 2, Spring 2013, Laurence Barker

Archive of past papers, solutions and homeworks for. MATH 224, Linear Algebra 2, Spring 2013, Laurence Barker Archive of past papers, solutions and homeworks for MATH 224, Linear Algebra 2, Spring 213, Laurence Barker version: 4 June 213 Source file: archfall99.tex page 2: Homeworks. page 3: Quizzes. page 4: Midterm

More information