The inverse of a tridiagonal matrix


Linear Algebra and its Applications 325 (2001)

The inverse of a tridiagonal matrix

Ranjan K. Mallik
Department of Electrical Engineering, Indian Institute of Technology, Hauz Khas, New Delhi, India
E-mail address: rkmallik@ee.iitd.ernet.in (R.K. Mallik)

Received 6 March 1998; accepted 1 September 2000
Submitted by R.A. Brualdi

Abstract

In this paper, explicit formulae for the elements of the inverse of a general tridiagonal matrix are presented by first extending results on the explicit solution of a second-order linear homogeneous difference equation with variable coefficients to the nonhomogeneous case, and then applying these extended results to a boundary value problem. A formula for the characteristic polynomial is obtained in the process. We also establish a connection between the matrix inverse and orthogonal polynomials. In addition, the case of a cyclic tridiagonal system is discussed. © 2001 Elsevier Science Inc. All rights reserved.

AMS classification: 15A09; 39A10

Keywords: Tridiagonal matrix; Inverse; Second-order linear difference equation; Variable coefficients; Explicit solution; Orthogonal polynomials

1. Introduction

Tridiagonal matrices [1-4] are connected with different areas of science and engineering, including telecommunication system analysis [5] and finite difference methods for solving partial differential equations [4,6,7]. In many of these areas, inversions of tridiagonal matrices are necessary. Efficient algorithms [8], indirect formulae [1,9-12], and direct expressions in some special cases [4,7] for such inversions are known. Bounds on the elements of the inverses of diagonally dominant tridiagonal matrices have also been obtained [13]. However, explicit formulae for the

elements of a general tridiagonal matrix inverse, which can give a better analytical treatment to a problem, are not available in the open literature [1].

In this paper, we present explicit formulae for the elements of the inverse of a general tridiagonal matrix. The approach is based on linear difference equations [14,15], and is as follows. Results on the explicit solution of a second-order linear homogeneous difference equation with variable coefficients [16] are extended to the nonhomogeneous case. Then these extended results are applied to a boundary value problem to obtain the desired formulae. A formula for the characteristic polynomial is obtained in the process. We also establish a connection between the matrix inverse and orthogonal polynomials. In addition, the case of a cyclic tridiagonal system is discussed.

2. Results on the second-order linear homogeneous difference equation with variable coefficients

In this section, we will review some results on the second-order linear homogeneous difference equation with variable coefficients [16]. Let $\mathbb{N}$ denote the set of natural numbers. A set $S_q(L, U)$, where $q, L, U \in \mathbb{N}$, has been defined in [16] as the set of all $q$-tuples with elements from $\{L, L+1, \ldots, U\}$ arranged in ascending order so that no two consecutive elements are present, that is,

$$S_q(L, U) \triangleq \begin{cases}
\{L, L+1, \ldots, U\} & \text{if } U \geq L \text{ and } q = 1, \quad \text{(1a)}\\[3pt]
\bigl\{(k_1, \ldots, k_q) : k_1, \ldots, k_q \in \{L, L+1, \ldots, U\};\ k_l - k_{l-1} \geq 2 \text{ for } l = 2, \ldots, q\bigr\} & \text{if } U \geq L+2 \text{ and } 2 \leq q \leq \bigl\lfloor \tfrac{U-L+2}{2} \bigr\rfloor, \quad \text{(1b)}\\[3pt]
\emptyset & \text{otherwise.} \quad \text{(1c)}
\end{cases}$$

For example,

$$S_1(2, 6) = \{2, 3, 4, 5, 6\}, \quad S_2(2, 6) = \{(2, 4), (2, 5), (2, 6), (3, 5), (3, 6), (4, 6)\},$$
$$S_3(2, 6) = \{(2, 4, 6)\}, \quad S_4(2, 6) = \emptyset.$$

The following results have been proved in [16]. For $U \geq L$, $1 \leq q \leq \lfloor (U-L+2)/2 \rfloor$,

$$|S_q(L, U)| = \binom{U-L-q+2}{q} = \frac{(U-L-q+2)!}{q!\,(U-L-2q+2)!}. \tag{2}$$

For $U \geq L$, $q \geq 2$,

$$S_q(L, U+1) = S_q(L, U) \cup \bigl\{(k_1, \ldots, k_{q-1}, k_q) : k_q = U+1;\ (k_1, \ldots, k_{q-1}) \in S_{q-1}(L, U-1)\bigr\}. \tag{3}$$

We add a proposition which is similar to result (3).

Proposition 1. For $U \geq L$, $q \geq 2$,

$$S_q(L-1, U) = S_q(L, U) \cup \bigl\{(k_1, k_2, \ldots, k_q) : k_1 = L-1;\ (k_2, \ldots, k_q) \in S_{q-1}(L+1, U)\bigr\}. \tag{4}$$

Proof. If either $U = L$, $q \geq 2$ or $U \geq L+1$, $q > \lfloor (U-L+3)/2 \rfloor$, then Eq. (4) holds trivially. For $U \geq L+1$, $2 \leq q \leq \lfloor (U-L+3)/2 \rfloor$, we get, from (1a)-(1c),

$$S_q(L-1, U) = \bigl\{(k_1, \ldots, k_q) : k_1, \ldots, k_q \in \{L-1, L, \ldots, U\};\ k_l - k_{l-1} \geq 2 \text{ for } l = 2, \ldots, q\bigr\}. \tag{5}$$

The right-hand side of (5) can be expressed as the union of two disjoint sets, one containing $q$-tuples with $k_1 \neq L-1$ and the other containing $q$-tuples with $k_1 = L-1$, which we denote as $S_{q0}(L-1, U)$ and $S_{q1}(L-1, U)$, respectively. Therefore,

$$S_q(L-1, U) = S_{q0}(L-1, U) \cup S_{q1}(L-1, U), \tag{6}$$

where

$$S_{q0}(L-1, U) = \bigl\{(k_1, \ldots, k_q) : k_1, \ldots, k_q \in \{L, L+1, \ldots, U\};\ k_l - k_{l-1} \geq 2 \text{ for } l = 2, \ldots, q\bigr\},$$
$$S_{q1}(L-1, U) = \bigl\{(k_1, \ldots, k_q) : k_1 = L-1;\ k_2, \ldots, k_q \in \{L, L+1, \ldots, U\};\ k_l - k_{l-1} \geq 2 \text{ for } l = 2, \ldots, q\bigr\}. \tag{7}$$

It is clear from (1a)-(1c) that

$$S_{q0}(L-1, U) = S_q(L, U). \tag{8}$$

In the set $S_{q1}(L-1, U)$, since $k_1 = L-1$, we have $k_2 - (L-1) \geq 2$, which implies that $k_2, \ldots, k_q \in \{L+1, L+2, \ldots, U\}$. Therefore,

$$S_{q1}(L-1, U) = \bigl\{(k_1, k_2, \ldots, k_q) : k_1 = L-1;\ (k_2, \ldots, k_q) \in S_{q-1}(L+1, U)\bigr\}. \tag{9}$$

Combining Eqs. (6), (8) and (9), we get (4). □
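These set identities are easy to confirm by brute-force enumeration. The sketch below is our own illustration, not part of the paper; the helper name `S` is a hypothetical enumerator of $S_q(L, U)$. It checks the cardinality formula (2) and Proposition 1 on small cases:

```python
from itertools import combinations
from math import comb

def S(q, L, U):
    """Enumerate S_q(L, U): ascending q-tuples from {L,...,U} with no two
    consecutive elements, Eqs. (1a)-(1c); q = 1 gives singleton tuples."""
    return {c for c in combinations(range(L, U + 1), q)
            if all(c[l] - c[l - 1] >= 2 for l in range(1, q))}

# Cardinality formula (2): |S_q(L,U)| = C(U-L-q+2, q)
for L, U in [(2, 6), (3, 9)]:
    for q in range(1, (U - L + 2) // 2 + 1):
        assert len(S(q, L, U)) == comb(U - L - q + 2, q)

# Proposition 1, Eq. (4): S_q(L-1,U) = S_q(L,U) ∪ {(L-1,)+k : k in S_{q-1}(L+1,U)}
L, U, q = 3, 9, 3
assert S(q, L - 1, U) == S(q, L, U) | {(L - 1,) + k for k in S(q - 1, L + 1, U)}
```

The enumeration also reproduces the worked example above, e.g. $S_3(2,6) = \{(2,4,6)\}$.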

Consider the second-order linear homogeneous difference equation

$$y_{n+2} = A_n y_{n+1} + B_n y_n, \quad n \geq 1, \tag{10}$$

with integral index $n$, variable complex coefficients $A_n$ and $B_n$, $B_n \neq 0$, and complex initial values $y_1, y_2$. Define, for $k \geq 2$,

$$\sigma_k \triangleq \frac{B_k}{A_{k-1} A_k}. \tag{11}$$

It has been shown in [16] that the explicit solution of difference equation (10), which is an expression for $y_{n+2}$ in terms of only coefficients $A_1, \ldots, A_n$, $B_1, \ldots, B_n$, and initial values $y_1, y_2$, is given by

$$y_{n+2} = C_n y_2 + D_n y_1, \quad n \geq 0, \tag{12}$$

where

$$C_0 = 1, \quad C_1 = A_1, \tag{13a}$$
$$D_0 = 0, \quad D_1 = B_1, \quad D_2 = B_1 A_2, \tag{13b}$$

and

$$C_n = (A_1 \cdots A_n) \left[ 1 + \sum_{q=1}^{\lfloor n/2 \rfloor} \sum_{(k_1, \ldots, k_q) \in S_q(2, n)} \sigma_{k_1} \cdots \sigma_{k_q} \right] \quad \text{for } n \geq 2, \tag{14a}$$

$$D_n = B_1 (A_2 \cdots A_n) \left[ 1 + \sum_{q=1}^{\lfloor (n-1)/2 \rfloor} \sum_{(k_1, \ldots, k_q) \in S_q(3, n)} \sigma_{k_1} \cdots \sigma_{k_q} \right] \quad \text{for } n \geq 3, \tag{14b}$$

where $S_q(L, U)$ is defined by (1a)-(1c), and $\sigma_k$ is defined by (11).

3. Solution of the second-order linear nonhomogeneous difference equation with variable coefficients

We now focus on the second-order linear nonhomogeneous difference equation

$$y_{n+2} = A_n y_{n+1} + B_n y_n + x_{n+2}, \quad n \geq 1, \tag{15}$$

with integral index $n$, variable complex coefficients $A_n$ and $B_n$, $B_n \neq 0$, complex forcing term $x_{n+2}$, and complex initial values $y_1, y_2$.
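Before developing the nonhomogeneous theory, the homogeneous results (12)-(14b) quoted from [16] can be sanity-checked numerically. The sketch below is our own illustration (the helper `S` enumerating $S_q(L, U)$ is a hypothetical name); it compares the explicit formulae against direct iteration of (10) for random coefficients:

```python
from itertools import combinations
from math import prod
import random

def S(q, L, U):
    # ascending q-tuples from {L,...,U} with no two consecutive elements
    return [c for c in combinations(range(L, U + 1), q)
            if all(c[l] - c[l - 1] >= 2 for l in range(1, q))]

random.seed(1)
N = 8
A = {n: random.uniform(1, 2) for n in range(1, N + 1)}
B = {n: random.uniform(1, 2) for n in range(1, N + 1)}
sigma = {k: B[k] / (A[k - 1] * A[k]) for k in range(2, N + 1)}  # Eq. (11)

def C(n):  # Eqs. (13a), (14a)
    if n == 0: return 1.0
    if n == 1: return A[1]
    tail = sum(prod(sigma[k] for k in t)
               for q in range(1, n // 2 + 1) for t in S(q, 2, n))
    return prod(A[m] for m in range(1, n + 1)) * (1 + tail)

def D(n):  # Eqs. (13b), (14b)
    if n == 0: return 0.0
    if n == 1: return B[1]
    tail = sum(prod(sigma[k] for k in t)
               for q in range(1, (n - 1) // 2 + 1) for t in S(q, 3, n))
    return B[1] * prod(A[m] for m in range(2, n + 1)) * (1 + tail)

# iterate y_{n+2} = A_n y_{n+1} + B_n y_n and compare with Eq. (12)
y1, y2 = 0.7, -1.3
y = {1: y1, 2: y2}
for n in range(1, N + 1):
    y[n + 2] = A[n] * y[n + 1] + B[n] * y[n]
for n in range(0, N + 1):
    assert abs(y[n + 2] - (C(n) * y2 + D(n) * y1)) < 1e-6
```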

Let $\sigma_k = B_k/(A_{k-1} A_k)$ for $k \geq 2$ as in (11). For $n \geq 0$, $i \geq 1$, define a quantity $E_n(i)$, which is a function of the $A_k$'s and the $\sigma_k$'s, as

$$E_n(i) \triangleq \begin{cases}
(A_i \cdots A_n) \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor (n-i+1)/2 \rfloor} \sum_{(k_1, \ldots, k_q) \in S_q(i+1, n)} \sigma_{k_1} \cdots \sigma_{k_q} \right] & \text{if } i = 1, \ldots, n-1,\ n \geq 2,\\[6pt]
A_n & \text{if } i = n,\ n \geq 1,\\[2pt]
1 & \text{if } i = n+1,\ n \geq 0,\\[2pt]
0 & \text{otherwise.}
\end{cases} \tag{16}$$

Therefore, $E_n(i)$ is nontrivial only for $n \geq i-1$, $i \geq 1$. Note that

$$E_n(1) = C_n \quad \text{for } n \geq 0, \tag{17}$$

where $C_n$ is given by (13a) and (14a). Two recurrences for $E_n(i)$ are established by the following two propositions.

Proposition 2. For $n \geq i-1$, $i \geq 1$,

$$E_{n+2}(i) = A_{n+2} E_{n+1}(i) + B_{n+2} E_n(i). \tag{18}$$

Proof. Using the definition of $E_n(i)$ in (16) and $\sigma_i$ in (11), the right-hand side of (18) can be written for $n = i-1$, $i \geq 1$, as

$$A_{i+1} E_i(i) + B_{i+1} E_{i-1}(i) = A_{i+1} A_i + B_{i+1} = (A_i A_{i+1})(1 + \sigma_{i+1}) = E_{i+1}(i) \tag{19}$$

and for $n = i$, $i \geq 1$, as

$$A_{i+2} E_{i+1}(i) + B_{i+2} E_i(i) = (A_i A_{i+1} A_{i+2})(1 + \sigma_{i+1}) + B_{i+2} A_i = (A_i A_{i+1} A_{i+2})(1 + \sigma_{i+1} + \sigma_{i+2}) = E_{i+2}(i), \tag{20}$$

which imply that (18) holds for $n = i-1, i$, $i \geq 1$. Now consider the case when $n \geq i+1$, $i \geq 1$. Again, using (16) and noting the fact that $B_{n+2} = A_{n+1} A_{n+2} \sigma_{n+2}$, we get

$$A_{n+2} E_{n+1}(i) + B_{n+2} E_n(i) = (A_i \cdots A_{n+2}) \left[ 1 + \sum_{q=1}^{\lfloor (n-i+2)/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(i+1,n+1)} \sigma_{k_1} \cdots \sigma_{k_q} + \sigma_{n+2} + \sum_{q=2}^{\lfloor (n-i+3)/2 \rfloor} \sum_{(k_1,\ldots,k_{q-1}) \in S_{q-1}(i+1,n)} (\sigma_{k_1} \cdots \sigma_{k_{q-1}}) \sigma_{n+2} \right]. \tag{21}$$

Now

$$\left\lfloor \frac{n-i+2}{2} \right\rfloor = \begin{cases} \left\lfloor \dfrac{n-i+3}{2} \right\rfloor & \text{if } (n-i+2) \text{ is even},\\[6pt] \left\lfloor \dfrac{n-i+3}{2} \right\rfloor - 1 & \text{if } (n-i+2) \text{ is odd}. \end{cases} \tag{22}$$

If $(n-i+2)$ is odd, by (1a)-(1c),

$$S_{\lfloor (n-i+3)/2 \rfloor}(i+1, n+1) = S_{\lfloor (n-i+2)/2 \rfloor + 1}(i+1, n+1) = \emptyset.$$

Then, using (3), we get

$$S_q(i+1, n+2) = S_q(i+1, n+1) \cup \bigl\{(k_1,\ldots,k_q) : k_q = n+2;\ (k_1,\ldots,k_{q-1}) \in S_{q-1}(i+1, n)\bigr\} \tag{23}$$

for $q = 2, \ldots, \lfloor (n-i+3)/2 \rfloor$. Putting together the summation terms of (21) using (22) and (23), we obtain

$$A_{n+2} E_{n+1}(i) + B_{n+2} E_n(i) = (A_i \cdots A_{n+2}) \left[ 1 + \sum_{q=1}^{\lfloor (n-i+3)/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(i+1,n+2)} \sigma_{k_1} \cdots \sigma_{k_q} \right] = E_{n+2}(i) \tag{24}$$

for $n \geq i+1$, $i \geq 1$. A combination of (19), (20) and (24) yields (18). □

Proposition 3. For $1 \leq i \leq n-1$, $n \geq 2$,

$$E_n(i) = A_i E_n(i+1) + B_{i+1} E_n(i+2). \tag{25}$$

Proof. From the definition of $E_n(i)$ in (16), the right-hand side of (25) can be written for $i = n-1$ as

$$A_{n-1} E_n(n) + B_n E_n(n+1) = A_{n-1} A_n + B_n = (A_{n-1} A_n)(1 + \sigma_n) = E_n(n-1) \tag{26}$$

and for $i = n-2$ as

$$A_{n-2} E_n(n-1) + B_{n-1} E_n(n) = (A_{n-2} A_{n-1} A_n)(1 + \sigma_n) + B_{n-1} A_n = (A_{n-2} A_{n-1} A_n)(1 + \sigma_n + \sigma_{n-1}) = E_n(n-2), \tag{27}$$

which imply that (25) holds for $i = n-2, n-1$. Now consider the case when $1 \leq i \leq n-3$. Using (16) and (11) we obtain

$$A_i E_n(i+1) + B_{i+1} E_n(i+2) = (A_i \cdots A_n) \left[ 1 + \sum_{q=1}^{\lfloor (n-i)/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(i+2,n)} \sigma_{k_1} \cdots \sigma_{k_q} + \sigma_{i+1} + \sum_{q=2}^{\lfloor (n-i+1)/2 \rfloor} \sum_{(k_2,\ldots,k_q) \in S_{q-1}(i+3,n)} \sigma_{i+1} (\sigma_{k_2} \cdots \sigma_{k_q}) \right]. \tag{28}$$

Now

$$\left\lfloor \frac{n-i}{2} \right\rfloor = \begin{cases} \left\lfloor \dfrac{n-i+1}{2} \right\rfloor & \text{if } (n-i) \text{ is even},\\[6pt] \left\lfloor \dfrac{n-i+1}{2} \right\rfloor - 1 & \text{if } (n-i) \text{ is odd}. \end{cases} \tag{29}$$

If $(n-i)$ is odd, then by (1a)-(1c),

$$S_{\lfloor (n-i+1)/2 \rfloor}(i+2, n) = S_{\lfloor (n-i)/2 \rfloor + 1}(i+2, n) = \emptyset.$$

Then, using Proposition 1, we get

$$S_q(i+1, n) = S_q(i+2, n) \cup \bigl\{(k_1,\ldots,k_q) : k_1 = i+1;\ (k_2,\ldots,k_q) \in S_{q-1}(i+3, n)\bigr\} \tag{30}$$

for $q = 2, \ldots, \lfloor (n-i+1)/2 \rfloor$. Putting together the summation terms of (28) using (29) and (30), we obtain (25). □

The following proposition provides the explicit solution of the difference equation (15), which consists of an expression for $y_{n+2}$ in terms of only coefficients $A_1, \ldots, A_n$, $B_1, \ldots, B_n$, initial values $y_1, y_2$, and forcing terms $x_3, \ldots, x_{n+2}$.

Proposition 4. The solution of difference equation (15) with initial values $y_1, y_2$ is given by

$$y_{n+2} = C_n y_2 + D_n y_1 + \sum_{k=3}^{n+2} E_n(k-1)\, x_k, \quad n \geq 1, \tag{31}$$

where $E_n(k)$ is defined by (16), and $C_n$ and $D_n$ by (13a), (13b), (14a), and (14b).

Proof. The solution of (15) with initial values $y_1, y_2$ can be expressed as

$$y_{n+2} = H_n + P_n, \quad n \geq 1, \tag{32}$$

where $H_n$ is the solution of the homogeneous equation, and $P_n$ the particular solution. From the results of Section 2 (Eqs. (10) and (12)), we have

$$H_n = C_n y_2 + D_n y_1, \quad n \geq 1. \tag{33}$$

It is also known that the particular solution $P_n$ satisfies the recurrence

$$P_{n+2} = A_{n+2} P_{n+1} + B_{n+2} P_n + x_{n+4}, \quad n \geq 1. \tag{34}$$

Using (15), (13a), (13b), (14a) and (14b), we get

$$y_3 = C_1 y_2 + D_1 y_1 + x_3, \tag{35}$$
$$y_4 = C_2 y_2 + D_2 y_1 + (x_4 + A_2 x_3), \tag{36}$$

where $C_1 = A_1$, $C_2 = A_1 A_2 + B_2$, $D_1 = B_1$, $D_2 = B_1 A_2$, which in turn implies that

$$P_1 = x_3, \quad P_2 = x_4 + A_2 x_3. \tag{37}$$

We now proceed to prove the validity of the expression

$$P_n = \sum_{k=3}^{n+2} E_n(k-1)\, x_k \tag{38}$$

for $n \geq 1$. It is clear from (37) and the definition of $E_n(k)$ in (16) that expression (38) holds for $n = 1, 2$. Let (38) hold for $P_n$ and $P_{n+1}$ for $n \geq 1$. Then, from (34), Proposition 2 and (16), we get

$$P_{n+2} = A_{n+2} \sum_{k=3}^{n+3} E_{n+1}(k-1)\, x_k + B_{n+2} \sum_{k=3}^{n+2} E_n(k-1)\, x_k + x_{n+4}$$
$$= \sum_{k=3}^{n+2} \bigl[ A_{n+2} E_{n+1}(k-1) + B_{n+2} E_n(k-1) \bigr] x_k + A_{n+2} x_{n+3} + x_{n+4}$$
$$= \sum_{k=3}^{n+2} E_{n+2}(k-1)\, x_k + E_{n+2}(n+2)\, x_{n+3} + E_{n+2}(n+3)\, x_{n+4}. \tag{39}$$

Thus,

$$P_{n+2} = \sum_{k=3}^{n+4} E_{n+2}(k-1)\, x_k, \tag{40}$$

which agrees with (38). By mathematical induction, expression (38) is valid for all $n \geq 1$. From (32), (33) and (38), we obtain (31). □

3.1. An example

Consider the case when we need to find $y_6$ in the difference equation (15) in terms of coefficients $A_1, A_2, A_3, A_4$, $B_1, B_2, B_3, B_4$, initial values $y_1, y_2$, and forcing terms $x_3, x_4, x_5, x_6$. From Proposition 4, we have

$$y_6 = C_4 y_2 + D_4 y_1 + \sum_{k=3}^{6} E_4(k-1)\, x_k.$$

Eqs. (14a) and (14b) imply that

$$C_4 = (A_1 A_2 A_3 A_4) \left[ 1 + \sum_{k_1 \in S_1(2,4)} \sigma_{k_1} + \sum_{(k_1,k_2) \in S_2(2,4)} \sigma_{k_1} \sigma_{k_2} \right],$$
$$D_4 = (B_1 A_2 A_3 A_4) \left[ 1 + \sum_{k_1 \in S_1(3,4)} \sigma_{k_1} \right].$$

Now, from the definition of $S_q(L, U)$ in (1a)-(1c), we get

$$S_1(2, 4) = \{2, 3, 4\}, \quad S_2(2, 4) = \{(2, 4)\}, \quad S_1(3, 4) = \{3, 4\}.$$

Therefore,

$$C_4 = (A_1 A_2 A_3 A_4)(1 + \sigma_2 + \sigma_3 + \sigma_4 + \sigma_2 \sigma_4), \quad D_4 = (B_1 A_2 A_3 A_4)(1 + \sigma_3 + \sigma_4),$$

where, from (11),

$$\sigma_2 = \frac{B_2}{A_1 A_2}, \quad \sigma_3 = \frac{B_3}{A_2 A_3}, \quad \sigma_4 = \frac{B_4}{A_3 A_4}.$$

In addition, from (16), we obtain

$$E_4(2) = (A_2 A_3 A_4)(1 + \sigma_3 + \sigma_4), \quad E_4(3) = (A_3 A_4)(1 + \sigma_4), \quad E_4(4) = A_4, \quad E_4(5) = 1.$$

Therefore, by substituting the expressions for $\sigma_2$, $\sigma_3$ and $\sigma_4$ in $C_4$, $D_4$, $E_4(2)$, $E_4(3)$, $E_4(4)$, $E_4(5)$, we finally have the desired solution, which is

$$y_6 = (A_1 A_2 A_3 A_4 + B_2 A_3 A_4 + B_3 A_1 A_4 + B_4 A_1 A_2 + B_2 B_4)\, y_2 + (B_1 A_2 A_3 A_4 + B_1 B_3 A_4 + B_1 B_4 A_2)\, y_1$$
$$+ (A_2 A_3 A_4 + B_3 A_4 + B_4 A_2)\, x_3 + (A_3 A_4 + B_4)\, x_4 + A_4 x_5 + x_6.$$

4. A boundary value problem and the tridiagonal matrix inverse

Consider the difference equation (15) over the finite index interval $[1, K+2]$, $K \geq 2$, with zero boundary values, that is, the recurrence

$$y_{n+2} = A_n y_{n+1} + B_n y_n + x_{n+2}, \quad 1 \leq n \leq K, \tag{41}$$

with boundary values $y_1, y_{K+2}$ such that

$$y_1 = y_{K+2} = 0. \tag{42}$$

The $K$ equations in (41) can be written in matrix form as

$$\begin{bmatrix}
-B_1 & -A_1 & 1 & & & \\
& -B_2 & -A_2 & 1 & & \\
& & \ddots & \ddots & \ddots & \\
& & & -B_{K-1} & -A_{K-1} & 1 & \\
& & & & -B_K & -A_K & 1
\end{bmatrix}
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{K+1} \\ y_{K+2} \end{bmatrix}
= \begin{bmatrix} x_3 \\ \vdots \\ x_{K+2} \end{bmatrix}. \tag{43}$$

Using boundary conditions (42), the solution of the difference equation (41) can be expressed as

$$\begin{bmatrix} y_2 \\ \vdots \\ y_{K+1} \end{bmatrix}
= \begin{bmatrix}
-A_1 & 1 & & & \\
-B_2 & -A_2 & 1 & & \\
& \ddots & \ddots & \ddots & \\
& & -B_{K-1} & -A_{K-1} & 1 \\
& & & -B_K & -A_K
\end{bmatrix}^{-1}
\begin{bmatrix} x_3 \\ \vdots \\ x_{K+2} \end{bmatrix}. \tag{44}$$

However, from Proposition 4, the quantities $y_3, \ldots, y_{K+2}$ of recurrence (41) can be expressed in terms of $y_1, y_2$ as

$$y_{i+1} = C_{i-1} y_2 + D_{i-1} y_1 + \sum_{j=1}^{i-1} E_{i-1}(j+1)\, x_{j+2}, \quad 2 \leq i \leq K+1. \tag{45}$$

As a result of boundary conditions (42), substitution of $i = K+1$ in (45) gives

$$y_{K+2} = C_K y_2 + \sum_{j=1}^{K} E_K(j+1)\, x_{j+2} = 0, \tag{46}$$

and therefore, when $C_K \neq 0$,

$$y_2 = \sum_{j=1}^{K} \left[ -\frac{E_K(j+1)}{C_K} \right] x_{j+2}. \tag{47}$$

Substituting (47) in (45) and setting $y_1 = 0$ we obtain

$$y_{i+1} = -\sum_{j=1}^{K} \frac{C_{i-1} E_K(j+1)}{C_K}\, x_{j+2} + \sum_{j=1}^{i-1} E_{i-1}(j+1)\, x_{j+2}, \quad 2 \leq i \leq K,\ C_K \neq 0. \tag{48}$$

In other words,

$$y_{i+1} = \sum_{j=1}^{i-1} \left[ E_{i-1}(j+1) - \frac{C_{i-1} E_K(j+1)}{C_K} \right] x_{j+2} + \sum_{j=i}^{K} \left[ -\frac{C_{i-1} E_K(j+1)}{C_K} \right] x_{j+2}, \quad 2 \leq i \leq K,\ C_K \neq 0. \tag{49}$$

Using (16) and (17), Eqs. (47) and (49) can be written together as

$$y_{i+1} = \sum_{j=1}^{K} \left[ E_{i-1}(j+1) - \frac{E_{i-1}(1)\, E_K(j+1)}{E_K(1)} \right] x_{j+2}, \quad 1 \leq i \leq K,\ E_K(1) \neq 0. \tag{50}$$

Thus the solution of boundary value problem (41), (42) is given by (47) and (49), or alternatively by (50). This solution also gives the inverse of a tridiagonal matrix. Denoting the $K \times K$ tridiagonal matrix derived from the $K \times (K+2)$ matrix on the left-hand side of (43) (by removing the first and $(K+2)$th columns) as

$$T_K = \begin{bmatrix}
-A_1 & 1 & & & 0\\
-B_2 & -A_2 & 1 & & \\
& \ddots & \ddots & \ddots & \\
& & -B_{K-1} & -A_{K-1} & 1\\
0 & & & -B_K & -A_K
\end{bmatrix}, \quad K \geq 2, \tag{51}$$

and its inverse as

$$R_K = T_K^{-1} = \bigl[ R_{i,j} \bigr]_{i,j=1}^{K}, \tag{52}$$

we obtain, from (44), (47) and (49), an expression for $R_{i,j}$, the element in the $i$th row and $j$th column of $T_K^{-1}$. It is given by

$$R_{i,j} = \begin{cases}
E_{i-1}(j+1) - \dfrac{C_{i-1}\, E_K(j+1)}{C_K} & \text{for } j = 1, \ldots, i-1,\ i = 2, \ldots, K,\\[6pt]
-\dfrac{C_{i-1}\, E_K(j+1)}{C_K} & \text{for } j = i, \ldots, K,\ i = 1, \ldots, K,
\end{cases} \quad C_K \neq 0, \tag{53}$$

where $E_i(j)$ is defined in (16), and $C_i$ in (13a) and (14a). Alternatively, from (44) and (50), we get

$$R_{i,j} = E_{i-1}(j+1) - \frac{E_{i-1}(1)\, E_K(j+1)}{E_K(1)} \quad \text{for } i, j = 1, \ldots, K,\ E_K(1) \neq 0. \tag{54}$$

The following proposition gives an expression for the determinant of the tridiagonal matrix $T_K$.

Proposition 5. For $K \geq 2$,

$$\det(T_K) = (-1)^K C_K, \tag{55}$$

where $T_K$ is given by (51) and $C_K$ by (14a).

Proof. From (51), it is clear that if we find $\det(T_K)$ by expanding along the $K$th row of $T_K$ followed by its $K$th column, we obtain

$$\det(T_K) = -A_K \det(T_{K-1}) + B_K \det(T_{K-2}), \quad K \geq 4. \tag{56}$$

It is simple to show that

$$\det(T_2) = \det \begin{bmatrix} -A_1 & 1\\ -B_2 & -A_2 \end{bmatrix} = A_1 A_2 (1 + \sigma_2) = C_2, \tag{57a}$$

$$\det(T_3) = \det \begin{bmatrix} -A_1 & 1 & 0\\ -B_2 & -A_2 & 1\\ 0 & -B_3 & -A_3 \end{bmatrix} = -A_1 A_2 A_3 (1 + \sigma_2 + \sigma_3) = -C_3. \tag{57b}$$

Since $C_K = E_K(1)$, it is governed by the second-order recursion

$$C_K = A_K C_{K-1} + B_K C_{K-2}, \quad K \geq 4, \tag{58}$$

by Proposition 2. Therefore,

$$(-1)^K C_K = -A_K (-1)^{K-1} C_{K-1} + B_K (-1)^{K-2} C_{K-2}, \quad K \geq 4. \tag{59}$$

Recursions (56) and (59) for $\det(T_K)$ and $(-1)^K C_K$, respectively, are identical, and from (57a) and (57b) we find that they have the same initial values $(-1)^2 C_2$ and $(-1)^3 C_3$. Hence, $\det(T_K) = (-1)^K C_K$ for $K \geq 2$. □

If it so happens that $\det(T_K) = 0$, then Proposition 5 implies that $C_K = 0$. In that case, the matrix $T_K$ is not invertible, and Eq. (53) for the elements of $T_K^{-1}$ does not hold.

Let $\Delta_{j,i}$ denote the minor corresponding to the $j$th row and $i$th column of $T_K$. Then, from (53) and Proposition 5, we have

$$\Delta_{j,i} = \begin{cases}
(-1)^{K-i-j} \bigl[ C_K E_{i-1}(j+1) - C_{i-1} E_K(j+1) \bigr] & \text{for } j = 1, \ldots, i-1,\ i = 2, \ldots, K,\\[3pt]
-(-1)^{K-i-j}\, C_{i-1} E_K(j+1) & \text{for } j = i, \ldots, K,\ i = 1, \ldots, K,
\end{cases} \tag{60}$$

or, alternatively, from (54) and Proposition 5,

$$\Delta_{j,i} = (-1)^{K-i-j} \bigl[ E_K(1) E_{i-1}(j+1) - E_{i-1}(1) E_K(j+1) \bigr] \quad \text{for } i, j = 1, \ldots, K. \tag{61}$$
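Formulae (53)-(54) and Proposition 5 lend themselves to a direct numerical check. The sketch below is our own illustration (NumPy assumed; `S` and `E` are hypothetical helper names for $S_q(L,U)$ and $E_n(i)$): it builds a random $T_K$ as in (51), forms $R_{i,j}$ from (54), and compares against `numpy.linalg`:

```python
import numpy as np
from itertools import combinations
from math import prod

rng = np.random.default_rng(0)
K = 6
A = {n: rng.uniform(1, 2) for n in range(1, K + 1)}
B = {n: rng.uniform(1, 2) for n in range(1, K + 1)}
sigma = {k: B[k] / (A[k - 1] * A[k]) for k in range(2, K + 1)}  # Eq. (11)

def S(q, L, U):
    # ascending q-tuples from {L,...,U} with no two consecutive elements
    return [c for c in combinations(range(L, U + 1), q)
            if all(c[l] - c[l - 1] >= 2 for l in range(1, q))]

def E(n, i):  # Eq. (16)
    if i == n + 1: return 1.0
    if i == n and n >= 1: return A[n]
    if 1 <= i <= n - 1:
        tail = sum(prod(sigma[k] for k in t)
                   for q in range(1, (n - i + 1) // 2 + 1) for t in S(q, i + 1, n))
        return prod(A[m] for m in range(i, n + 1)) * (1 + tail)
    return 0.0

# T_K as in (51): -A_i on the diagonal, 1 above it, -B_i below it
T = np.zeros((K, K))
for i in range(1, K + 1):
    T[i - 1, i - 1] = -A[i]
    if i < K: T[i - 1, i] = 1.0
    if i > 1: T[i - 1, i - 2] = -B[i]

# Eq. (54): R_{i,j} = E_{i-1}(j+1) - E_{i-1}(1) E_K(j+1) / E_K(1)
R = np.array([[E(i - 1, j + 1) - E(i - 1, 1) * E(K, j + 1) / E(K, 1)
               for j in range(1, K + 1)] for i in range(1, K + 1)])
assert np.allclose(R, np.linalg.inv(T))

# Proposition 5: det(T_K) = (-1)^K C_K, with C_K = E_K(1)
assert np.isclose(np.linalg.det(T), (-1) ** K * E(K, 1))
```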

4.1. The tridiagonal matrix inverse

We now consider a nonsingular $K \times K$ tridiagonal matrix $\Phi_K$ ($K \geq 2$), with complex entries, given by

$$\Phi_K = \begin{bmatrix}
\alpha_1 & \beta_1 & & & 0\\
\gamma_2 & \alpha_2 & \beta_2 & & \\
& \ddots & \ddots & \ddots & \\
& & \gamma_{K-1} & \alpha_{K-1} & \beta_{K-1}\\
0 & & & \gamma_K & \alpha_K
\end{bmatrix}, \quad \beta_1, \ldots, \beta_{K-1} \neq 0,\ \gamma_2, \ldots, \gamma_K \neq 0. \tag{62}$$

Without loss of generality, let $\beta_K$ be defined as $\beta_K \triangleq 1$, and

$$A_i = -\frac{\alpha_i}{\beta_i}, \quad 1 \leq i \leq K, \tag{63a}$$
$$B_i = -\frac{\gamma_i}{\beta_i}, \quad 2 \leq i \leq K, \tag{63b}$$

so that $A_1, \ldots, A_K$, $B_2, \ldots, B_K$ are the coefficients appearing in $T_K$ of (51). We can express $\Phi_K$ as $\Phi_K = \operatorname{diag}(\beta_1, \ldots, \beta_K)\, T_K$, and therefore, the inverse $\Psi_K$ of $\Phi_K$ can be expressed as

$$\Psi_K = \Phi_K^{-1} = \bigl[ \psi_{i,j} \bigr]_{i,j=1}^{K} = T_K^{-1} \operatorname{diag}\!\left( \frac{1}{\beta_1}, \ldots, \frac{1}{\beta_K} \right), \tag{64}$$

where, from (52) and (54),

$$\psi_{i,j} = \frac{R_{i,j}}{\beta_j} = \frac{1}{\beta_j} \left[ E_{i-1}(j+1) - \frac{E_{i-1}(1)\, E_K(j+1)}{E_K(1)} \right], \quad E_K(1) \neq 0, \tag{65a}$$

such that (using (16), (63a) and (63b))

$$E_n(m) = \begin{cases}
(-1)^{n-m+1}\, \dfrac{\alpha_m \cdots \alpha_n}{\beta_m \cdots \beta_n} \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor (n-m+1)/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(m+1,n)} \sigma_{k_1} \cdots \sigma_{k_q} \right] & \text{if } m = 1, \ldots, n-1,\ n = 2, \ldots, K,\\[8pt]
-\dfrac{\alpha_n}{\beta_n} & \text{if } m = n,\ n = 1, \ldots, K,\\[4pt]
1 & \text{if } m = n+1,\ n = 0, \ldots, K,\\[2pt]
0 & \text{otherwise,}
\end{cases} \tag{65b}$$

and (using (11), (63a) and (63b))

$$\sigma_k = -\frac{\beta_{k-1} \gamma_k}{\alpha_{k-1} \alpha_k}. \tag{65c}$$

The set $S_q(m+1, n)$ in (65b) is defined by (1a)-(1c).

It is known that for the matrix $\Psi_K$ in (64), there exist four sequences $\{u_i\}_{i=1}^{K}$, $\{v_i\}_{i=1}^{K}$, $\{r_i\}_{i=1}^{K}$, $\{s_i\}_{i=1}^{K}$ [9,10,17] such that

$$\psi_{i,j} = \begin{cases} u_i v_j & \text{if } i > j,\\ r_i s_j & \text{if } i \leq j. \end{cases} \tag{66}$$

We can express $\Psi_K$ as

$$\Psi_K = \Bigl( \bigl( \Phi_K^{\mathrm{T}} \bigr)^{-1} \Bigr)^{\mathrm{T}}, \tag{67}$$

where $\Phi_K^{\mathrm{T}}$ can be obtained from $\Phi_K$ by interchanging the positions of $\beta_j$ and $\gamma_{j+1}$ in (62) for $j = 1, \ldots, K-1$. Defining, without loss of generality, $\gamma_{K+1}$ as

$$\gamma_{K+1} \triangleq 1, \tag{68}$$

and interchanging $\beta_j$ and $\gamma_{j+1}$ for $j = 1, \ldots, K$ in formula (65b) for $E_n(m)$, we obtain a quantity $F_n(m)$ expressed as

$$F_n(m) = \begin{cases}
(-1)^{n-m+1}\, \dfrac{\alpha_m \cdots \alpha_n}{\gamma_{m+1} \cdots \gamma_{n+1}} \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor (n-m+1)/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(m+1,n)} \sigma_{k_1} \cdots \sigma_{k_q} \right] & \text{if } m = 1, \ldots, n-1,\ n = 2, \ldots, K,\\[8pt]
-\dfrac{\alpha_n}{\gamma_{n+1}} & \text{if } m = n,\ n = 1, \ldots, K,\\[4pt]
1 & \text{if } m = n+1,\ n = 0, \ldots, K,\\[2pt]
0 & \text{otherwise,}
\end{cases} \tag{69}$$

where $\sigma_k$, which is invariant under the interchange of $\beta_{k-1}$ and $\gamma_k$, is given by (65c). The elements of $\Psi_K$ can alternatively be written in terms of $F_n(m)$ as

$$\psi_{i,j} = \frac{1}{\gamma_{i+1}} \left[ F_{j-1}(i+1) - \frac{F_{j-1}(1)\, F_K(i+1)}{F_K(1)} \right], \quad F_K(1) \neq 0. \tag{70}$$

Observe from (65b) and (69) that, for $m \leq n$,

$$F_n(m) = \frac{\beta_m \cdots \beta_n}{\gamma_{m+1} \cdots \gamma_{n+1}}\, E_n(m). \tag{71}$$

Therefore, combining (65a), (70) and (71), we obtain

$$\psi_{i,j} = \begin{cases}
-\dfrac{F_{j-1}(1)\, F_K(i+1)}{\gamma_{i+1}\, F_K(1)} = -\dfrac{E_{j-1}(1)\, E_K(i+1)}{\beta_i\, E_K(1)} \cdot \dfrac{\gamma_{j+1} \cdots \gamma_i}{\beta_j \cdots \beta_{i-1}} & \text{if } i > j,\ F_K(1) \neq 0,\\[10pt]
-\dfrac{E_{i-1}(1)\, E_K(j+1)}{\beta_j\, E_K(1)} & \text{if } i \leq j,\ E_K(1) \neq 0.
\end{cases} \tag{72}$$

Thus, (72), along with (65b) and (69), gives an explicit formula for the element in the $i$th row and $j$th column of the inverse of the tridiagonal matrix $\Phi_K$ in (62). We see that (72) is consistent with the structure (66) of $\Psi_K$. Without loss of generality, substituting

$$r_i = E_{i-1}(1), \quad s_j = -\frac{E_K(j+1)}{\beta_j\, E_K(1)}, \quad u_i = -\frac{F_K(i+1)}{\gamma_{i+1}\, F_K(1)} = s_i\, \frac{\gamma_2 \cdots \gamma_i}{\beta_1 \cdots \beta_{i-1}}, \quad v_j = F_{j-1}(1) = r_j\, \frac{\beta_1 \cdots \beta_{j-1}}{\gamma_2 \cdots \gamma_j}, \tag{73}$$

we obtain from Proposition 2 the recurrences

$$r_i = -\frac{1}{\beta_{i-1}} \bigl( \alpha_{i-1} r_{i-1} + \gamma_{i-1} r_{i-2} \bigr), \tag{74a}$$
$$v_j = -\frac{1}{\gamma_j} \bigl( \alpha_{j-1} v_{j-1} + \beta_{j-2} v_{j-2} \bigr) \tag{74b}$$

for sequences $\{r_i\}$ and $\{v_j\}$, respectively, and from Proposition 3 the recurrences

$$s_j = -\frac{1}{\beta_j} \bigl( \alpha_{j+1} s_{j+1} + \gamma_{j+2} s_{j+2} \bigr), \tag{75a}$$
$$u_i = -\frac{1}{\gamma_{i+1}} \bigl( \alpha_{i+1} u_{i+1} + \beta_{i+1} u_{i+2} \bigr) \tag{75b}$$

for sequences $\{s_j\}$ and $\{u_i\}$, respectively. Recurrences (74a), (74b), (75a) and (75b) agree with those in Theorem 2 of [9]. Note that (66) combined with (73) can also be used to obtain the tridiagonal matrix inverse.

The quantities $\beta_K$ and $\gamma_{K+1}$, which have been defined to be 1, cancel out due to multiplication in the expression for $\psi_{i,j}$ in (72) as well as in the expressions for $u_i, s_j$ in (73). Hence, we can also consider $\beta_K$ and $\gamma_{K+1}$ to be arbitrary nonzero complex quantities in the definition of $E_n(m)$ in (65b) and $F_n(m)$ in (69).

4.2. An example

Consider the $3 \times 3$ tridiagonal matrix

$$\Phi_3 = \begin{bmatrix} \alpha_1 & \beta_1 & 0\\ \gamma_2 & \alpha_2 & \beta_2\\ 0 & \gamma_3 & \alpha_3 \end{bmatrix}, \quad \beta_1, \beta_2 \neq 0,\ \gamma_2, \gamma_3 \neq 0,$$

which is the matrix in (62) with $K = 3$. To obtain the inverse of this matrix, we first obtain the sequences $\{u_i\}_{i=1}^{3}$, $\{v_i\}_{i=1}^{3}$, $\{r_i\}_{i=1}^{3}$, $\{s_i\}_{i=1}^{3}$ using (73), and then substitute their expressions in (66). We need the expressions for $\sigma_2$ and $\sigma_3$ which, from (65c), are given by

$$\sigma_2 = -\frac{\beta_1 \gamma_2}{\alpha_1 \alpha_2}, \quad \sigma_3 = -\frac{\beta_2 \gamma_3}{\alpha_2 \alpha_3}.$$

Now, from (73) and (65b), we have

$$r_1 = E_0(1) = 1, \quad r_2 = E_1(1) = -\frac{\alpha_1}{\beta_1}, \quad r_3 = E_2(1) = \frac{\alpha_1 \alpha_2}{\beta_1 \beta_2}(1 + \sigma_2) = \frac{\alpha_1 \alpha_2 - \beta_1 \gamma_2}{\beta_1 \beta_2},$$

and

$$s_1 = -\frac{E_3(2)}{\beta_1 E_3(1)} = \frac{\alpha_2 \alpha_3 (1 + \sigma_3)}{\alpha_1 \alpha_2 \alpha_3 (1 + \sigma_2 + \sigma_3)} = \frac{\alpha_2 \alpha_3 - \beta_2 \gamma_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1},$$

$$s_2 = -\frac{E_3(3)}{\beta_2 E_3(1)} = -\frac{\beta_1 \alpha_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}, \quad s_3 = -\frac{E_3(4)}{\beta_3 E_3(1)} = \frac{\beta_1 \beta_2}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}.$$

By interchanging the positions of $\beta_{k-1}$ and $\gamma_k$, $k = 2, 3, 4$, in $r_i$ we obtain $v_i$, and by doing the same in $s_i$ we obtain $u_i$. Thus, from (73),

$$v_1 = F_0(1) = 1, \quad v_2 = F_1(1) = -\frac{\alpha_1}{\gamma_2}, \quad v_3 = F_2(1) = \frac{\alpha_1 \alpha_2 - \beta_1 \gamma_2}{\gamma_2 \gamma_3},$$

and

$$u_1 = -\frac{F_3(2)}{\gamma_2 F_3(1)} = \frac{\alpha_2 \alpha_3 - \beta_2 \gamma_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}, \quad u_2 = -\frac{F_3(3)}{\gamma_3 F_3(1)} = -\frac{\gamma_2 \alpha_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1},$$

$$u_3 = -\frac{F_3(4)}{\gamma_4 F_3(1)} = \frac{\gamma_2 \gamma_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}.$$

Note that the expressions for $r_1, r_2, r_3$, $s_1, s_2, s_3$, $u_1, u_2, u_3$, $v_1, v_2, v_3$ do not contain $\beta_3$ or $\gamma_4$. From (66), the diagonal elements of the inverse of $\Phi_3$ are then given by

$$\psi_{1,1} = u_1 v_1 = \frac{\alpha_2 \alpha_3 - \beta_2 \gamma_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}, \quad \psi_{2,2} = u_2 v_2 = \frac{\alpha_1 \alpha_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1},$$

$$\psi_{3,3} = u_3 v_3 = \frac{\alpha_1 \alpha_2 - \beta_1 \gamma_2}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}.$$

The upper diagonal elements are given by

$$\psi_{1,2} = r_1 s_2 = -\frac{\beta_1 \alpha_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}, \quad \psi_{1,3} = r_1 s_3 = \frac{\beta_1 \beta_2}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1},$$

$$\psi_{2,3} = r_2 s_3 = -\frac{\alpha_1 \beta_2}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}.$$

Each of the lower diagonal elements $\psi_{i,j}$ is obtained by interchanging $\beta_{k-1}$ and $\gamma_k$, $k = 2, 3$, in the corresponding upper diagonal element $\psi_{j,i}$. Therefore,

$$\psi_{2,1} = u_2 v_1 = -\frac{\gamma_2 \alpha_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}, \quad \psi_{3,1} = u_3 v_1 = \frac{\gamma_2 \gamma_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1},$$

$$\psi_{3,2} = u_3 v_2 = -\frac{\alpha_1 \gamma_3}{\alpha_1 \alpha_2 \alpha_3 - \beta_1 \gamma_2 \alpha_3 - \beta_2 \gamma_3 \alpha_1}.$$

5. Characteristic polynomial

The tridiagonal matrix of (62) satisfies

$$\Phi_K = \operatorname{diag}(\beta_1, \ldots, \beta_K)\, T_K, \tag{76}$$

where $T_K$ is given by (51) with $A_i = -\alpha_i/\beta_i$, $1 \leq i \leq K$, $B_i = -\gamma_i/\beta_i$, $2 \leq i \leq K$, and $\beta_K = 1$, as in (63a), (63b). From (76) and Proposition 5, we get

$$\det(\Phi_K) = (-1)^K (\beta_1 \cdots \beta_K)\, C_K = (-1)^K (\beta_1 \cdots \beta_K)\, E_K(1). \tag{77}$$

Using (65b), this can be rewritten as

$$\det(\Phi_K) = (\alpha_1 \cdots \alpha_K) \left[ 1 + \sum_{q=1}^{\lfloor K/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,K)} \sigma_{k_1} \cdots \sigma_{k_q} \right], \tag{78}$$

where $\sigma_k = -\beta_{k-1} \gamma_k / (\alpha_{k-1} \alpha_k)$ as in (65c). Now the characteristic polynomial of $\Phi_K$ is given by

$$\det(\lambda I_K - \Phi_K) = (-1)^K \det(\Phi_K - \lambda I_K), \tag{79}$$

where $I_K$ denotes the $K \times K$ identity matrix. By replacing $\alpha_k$ by $(\alpha_k - \lambda)$ for $k = 1, \ldots, K$ in (78) and substituting in (79), we get

$$\det(\lambda I_K - \Phi_K) = (\lambda - \alpha_1) \cdots (\lambda - \alpha_K) \left[ 1 + \sum_{q=1}^{\lfloor K/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,K)} \eta_{k_1}(\lambda) \cdots \eta_{k_q}(\lambda) \right], \tag{80a}$$

where

$$\eta_k(\lambda) = -\frac{\beta_{k-1} \gamma_k}{(\lambda - \alpha_{k-1})(\lambda - \alpha_k)}, \tag{80b}$$

and $S_q(2, K)$ is given by (1a)-(1c). Thus, (80a) and (80b) give an explicit formula for the characteristic polynomial of a tridiagonal matrix. For example, when $K = 4$ we get

$$\det(\lambda I_4 - \Phi_4) = (\lambda - \alpha_1)(\lambda - \alpha_2)(\lambda - \alpha_3)(\lambda - \alpha_4) \bigl[ 1 + \eta_2(\lambda) + \eta_3(\lambda) + \eta_4(\lambda) + \eta_2(\lambda)\eta_4(\lambda) \bigr]$$
$$= (\lambda - \alpha_1)(\lambda - \alpha_2)(\lambda - \alpha_3)(\lambda - \alpha_4) - \beta_1 \gamma_2 (\lambda - \alpha_3)(\lambda - \alpha_4) - \beta_2 \gamma_3 (\lambda - \alpha_1)(\lambda - \alpha_4) - \beta_3 \gamma_4 (\lambda - \alpha_1)(\lambda - \alpha_2) + \beta_1 \gamma_2 \beta_3 \gamma_4.$$
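Both the $3 \times 3$ inverse worked out in Section 4.2 and the $K = 4$ characteristic polynomial above can be confirmed numerically. The sketch below is our own illustration (NumPy assumed); it fixes arbitrary entries and checks the closed forms against `numpy.linalg`:

```python
import numpy as np

# --- the 3x3 example of Section 4.2 ---
a1, a2, a3 = 1.5, 2.0, 2.5          # alpha_i
b1, b2 = 0.7, 0.9                   # beta_i
g2, g3 = 1.1, 1.3                   # gamma_i
Phi3 = np.array([[a1, b1, 0.0],
                 [g2, a2, b2],
                 [0.0, g3, a3]])
Psi = np.linalg.inv(Phi3)
D = a1 * a2 * a3 - b1 * g2 * a3 - a1 * b2 * g3   # common denominator
assert np.isclose(Psi[0, 0], (a2 * a3 - b2 * g3) / D)
assert np.isclose(Psi[1, 1], a1 * a3 / D)
assert np.isclose(Psi[2, 2], (a1 * a2 - b1 * g2) / D)
assert np.isclose(Psi[0, 1], -b1 * a3 / D)
assert np.isclose(Psi[0, 2], b1 * b2 / D)
assert np.isclose(Psi[1, 2], -a1 * b2 / D)
assert np.isclose(Psi[1, 0], -g2 * a3 / D)
assert np.isclose(Psi[2, 0], g2 * g3 / D)
assert np.isclose(Psi[2, 1], -a1 * g3 / D)

# --- the K = 4 characteristic polynomial, Eqs. (80a)-(80b) ---
rng = np.random.default_rng(2)
a = rng.uniform(1, 2, 5)            # a[1..4] = alpha_1..alpha_4 (a[0] unused)
b = rng.uniform(1, 2, 4)            # b[1..3] = beta_1..beta_3 (b[0] unused)
g = rng.uniform(1, 2, 5)            # g[2..4] = gamma_2..gamma_4
Phi4 = np.diag(a[1:5]) + np.diag(b[1:4], 1) + np.diag(g[2:5], -1)

lam = 0.37                          # arbitrary evaluation point (not an alpha_k)
def eta(k):                         # Eq. (80b)
    return -b[k - 1] * g[k] / ((lam - a[k - 1]) * (lam - a[k]))

lhs = np.linalg.det(lam * np.eye(4) - Phi4)
rhs = np.prod([lam - a[k] for k in range(1, 5)]) * (1 + eta(2) + eta(3) + eta(4) + eta(2) * eta(4))
assert np.isclose(lhs, rhs)
```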

6. Orthogonal polynomials

Consider a set $\{p_n(t)\}_{n \geq 0}$ of polynomials that are orthogonal on the interval $[a, b]$ with respect to some nonnegative weight function $w(t)$ [18]. The polynomial $p_n(t)$ has degree $n$, and satisfies the second-order homogeneous difference equation [18,19]

$$\beta_n p_n(t) = (t - \alpha_n)\, p_{n-1}(t) - \gamma_n\, p_{n-2}(t), \quad n \geq 1, \tag{81}$$

where $\beta_n \neq 0$, $p_0(t)$ is some nonzero constant, and $p_{-1}(t)$ is a function of $t$, which may be identically zero. The difference equation (81) can be rewritten as

$$p_n(t) = \frac{(t - \alpha_n)}{\beta_n}\, p_{n-1}(t) - \frac{\gamma_n}{\beta_n}\, p_{n-2}(t), \quad n \geq 1, \tag{82}$$

with initial values $p_{-1}(t)$ and $p_0(t)$. Comparing (82) with the homogeneous difference equation (10), we can write

$$y_{n+2} = p_n(t), \quad n = -1, 0, 1, 2, \ldots, \tag{83a}$$
$$A_n = \frac{(t - \alpha_n)}{\beta_n}, \quad B_n = -\frac{\gamma_n}{\beta_n}. \tag{83b}$$

From (12)-(14b), (83a) and (83b), the solution of (82) is given by

$$p_n(t) = C_n(t)\, p_0(t) + D_n(t)\, p_{-1}(t), \quad n \geq 0, \tag{84}$$

where

$$C_0(t) = 1, \quad C_1(t) = \frac{(t - \alpha_1)}{\beta_1}, \tag{85a}$$
$$D_0(t) = 0, \quad D_1(t) = -\frac{\gamma_1}{\beta_1}, \quad D_2(t) = -\frac{\gamma_1 (t - \alpha_2)}{\beta_1 \beta_2}, \tag{85b}$$

and

$$C_n(t) = \frac{(t - \alpha_1) \cdots (t - \alpha_n)}{\beta_1 \cdots \beta_n} \left[ 1 + \sum_{q=1}^{\lfloor n/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,n)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right] \quad \text{for } n \geq 2, \tag{86a}$$

$$D_n(t) = -\frac{\gamma_1 (t - \alpha_2) \cdots (t - \alpha_n)}{\beta_1 \cdots \beta_n} \left[ 1 + \sum_{q=1}^{\lfloor (n-1)/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(3,n)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right] \quad \text{for } n \geq 3, \tag{86b}$$

such that

$$\eta_k(t) = -\frac{\beta_{k-1} \gamma_k}{(t - \alpha_{k-1})(t - \alpha_k)}$$

as in (80b). Note that $C_n(t)$ is a polynomial of degree $n$ for $n \geq 0$, while $D_n(t)$ is a polynomial of degree $n-1$ for $n \geq 1$. Thus, (84) is an expression for $p_n(t)$, $n \geq 0$, in terms of the recurrence coefficients $\alpha_1, \ldots, \alpha_n$, $\beta_1, \ldots, \beta_n$, $\gamma_1, \ldots, \gamma_n$ and initial values $p_{-1}(t)$, $p_0(t)$.

Let

$$\mathbf{p}(t) = \begin{bmatrix} p_0(t)\\ \vdots\\ p_{K-1}(t) \end{bmatrix} \tag{87}$$

denote the $K \times 1$ vector of orthogonal polynomials, and $\Phi_K$ the tridiagonal matrix in (62) of coefficients $\alpha_1, \ldots, \alpha_K$, $\beta_1, \ldots, \beta_{K-1}$, $\gamma_2, \ldots, \gamma_K$ of the difference equation (81) for $n = 1, \ldots, K$. We can rewrite (81) for $n = 1, \ldots, K$ as [19,20]

$$(t I_K - \Phi_K)\, \mathbf{p}(t) = \beta_K p_K(t)\, \mathbf{e}_K + \gamma_1 p_{-1}(t)\, \mathbf{e}_1, \tag{88}$$

where $\mathbf{e}_1$ and $\mathbf{e}_K$ are the first and last columns, respectively, of $I_K$. If $t$ is not an eigenvalue of $\Phi_K$, we get from (88)

$$\mathbf{p}(t) = -\beta_K p_K(t)\, (\Phi_K - t I_K)^{-1} \mathbf{e}_K - \gamma_1 p_{-1}(t)\, (\Phi_K - t I_K)^{-1} \mathbf{e}_1. \tag{89}$$

Replacing each diagonal element $\alpha_k$ by $\alpha_k - t$ in $\Phi_K$, we obtain the matrix $\Phi_K - t I_K$. Thus, each of the polynomials $p_0(t), \ldots, p_{K-1}(t)$ can be expressed in terms of the inverse of the tridiagonal matrix $\Phi_K - t I_K$ and the functions $p_K(t)$ and $p_{-1}(t)$. The vector $(\Phi_K - t I_K)^{-1} \mathbf{e}_K$ is the $K$th column of $(\Phi_K - t I_K)^{-1}$. Replacing $\alpha_k$ by $(\alpha_k - t)$, $k = 1, \ldots, K$, in (65b), we obtain from (72) the following expression for $\psi_{n+1,K}(t)$, the element in the $(n+1)$th row and $K$th column of $(\Phi_K - t I_K)^{-1}$:

$$\psi_{n+1,K}(t) = -\frac{\dfrac{(t - \alpha_1) \cdots (t - \alpha_n)}{\beta_1 \cdots \beta_n\, \beta_K} \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor n/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,n)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right]}{\dfrac{(t - \alpha_1) \cdots (t - \alpha_K)}{\beta_1 \cdots \beta_K} \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor K/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,K)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right]}. \tag{90}$$

Similarly, by replacing $\alpha_k$ by $(\alpha_k - t)$, $k = 1, \ldots, K$, in (69), we obtain from (72) the following expression for $\psi_{n+1,1}(t)$, the element in the $(n+1)$th row and first column of $(\Phi_K - t I_K)^{-1}$:

$$\psi_{n+1,1}(t) = -\frac{\dfrac{(t - \alpha_{n+2}) \cdots (t - \alpha_K)}{\gamma_{n+2}\, \gamma_{n+3} \cdots \gamma_{K+1}} \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor (K-n-1)/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(n+3,K)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right]}{\dfrac{(t - \alpha_1) \cdots (t - \alpha_K)}{\gamma_2 \cdots \gamma_{K+1}} \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor K/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,K)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right]}. \tag{91}$$

Now $p_n(t)$ is the $(n+1)$th component of $\mathbf{p}(t)$ in (89). Therefore, combining (89), (90) and (91), we obtain, for $n = 0, \ldots, K-1$,

$$p_n(t) = \frac{(\beta_{n+1} \cdots \beta_K) \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor n/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,n)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right]}{(t - \alpha_{n+1}) \cdots (t - \alpha_K) \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor K/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,K)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right]}\, p_K(t)$$
$$+ \frac{(\gamma_1 \cdots \gamma_{n+1}) \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor (K-n-1)/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(n+3,K)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right]}{(t - \alpha_1) \cdots (t - \alpha_{n+1}) \left[ 1 + \displaystyle\sum_{q=1}^{\lfloor K/2 \rfloor} \sum_{(k_1,\ldots,k_q) \in S_q(2,K)} \eta_{k_1}(t) \cdots \eta_{k_q}(t) \right]}\, p_{-1}(t), \tag{92}$$

where $\eta_k(t)$ is given by (80b). Thus, (92) is an expression for the $K$ intermediate polynomials $p_n(t)$, $0 \leq n \leq K-1$, in terms of the recurrence coefficients $\alpha_1, \ldots, \alpha_K$, $\beta_1, \ldots, \beta_K$, $\gamma_1, \ldots, \gamma_K$ and boundary values $p_{-1}(t)$, $p_K(t)$.

We now show how the general formula for $p_n(t)$ in (84) can be used to obtain expressions for Legendre, Hermite and Chebyshev polynomials.

6.1. Legendre polynomials

Legendre polynomials can be generated by the difference equation

$$p_n(t) = \frac{(2n-1)}{n}\, t\, p_{n-1}(t) - \frac{(n-1)}{n}\, p_{n-2}(t), \quad n \geq 1, \tag{93}$$

with $p_0(t) = 1$. The interval of orthogonality is $[-1, 1]$ and the weight function is $w(t) = 1$. Comparing (93) with (82), we obtain

$$\alpha_n = 0, \quad \beta_n = \frac{n}{(2n-1)}, \quad \gamma_n = \frac{(n-1)}{(2n-1)}, \quad n \geq 1. \tag{94}$$

Since $\gamma_1 = 0$, we have $D_n(t) = 0$. Therefore, $p_{-1}(t)$ does not affect the solution of (93). Now, from (80b),

$$\eta_k(t) = -\frac{(k-1)^2}{(2k-3)(2k-1)\, t^2}. \tag{95}$$

Using (86a), we can write the solution of (93) as

$$p_n(t) = C_n(t) = \frac{(2n)!}{2^n (n!)^2} \left[ t^n + \sum_{q=1}^{\lfloor n/2 \rfloor} (-1)^q\, t^{n-2q} \sum_{(k_1,\ldots,k_q) \in S_q(2,n)} \prod_{i=1}^{q} \frac{(k_i - 1)^2}{(2k_i - 3)(2k_i - 1)} \right]. \tag{96}$$

Denote $G_{n,q}$ as

$$G_{n,q} = \sum_{(k_1,\ldots,k_q) \in S_q(2,n)} \prod_{i=1}^{q} \frac{(k_i - 1)^2}{(2k_i - 3)(2k_i - 1)}. \tag{97}$$

Applying property (3) on $S_q(2, n)$ we get

$$S_q(2, n) = S_q(2, n-1) \cup \bigl\{(k_1,\ldots,k_{q-1},k_q) : k_q = n;\ (k_1,\ldots,k_{q-1}) \in S_{q-1}(2, n-2)\bigr\}, \tag{98}$$

which implies that $G_{n,q}$ in (97) follows the recurrence

$$G_{n,q} = G_{n-1,q} + \frac{(n-1)^2}{(2n-3)(2n-1)}\, G_{n-2,q-1}. \tag{99}$$

From (99), it can be shown by mathematical induction that

$$G_{n,q} = \frac{(n!)^2\, (2n-2q)!}{(2n)!\, q!\, (n-q)!\, (n-2q)!}. \tag{100}$$

Combining (96), (97) and (100), we finally obtain

$$p_n(t) = \sum_{q=0}^{\lfloor n/2 \rfloor} \frac{(-1)^q\, (2n-2q)!}{2^n\, q!\, (n-q)!\, (n-2q)!}\, t^{n-2q}, \tag{101}$$

which is the expression for the Legendre polynomial of degree $n$. This can alternatively be written as

$$p_n(t) = \frac{1}{2^n n!}\, \frac{\mathrm{d}^n}{\mathrm{d}t^n} \bigl[ (t^2 - 1)^n \bigr].$$
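Expression (101) can be checked against the three-term recurrence (93); the sketch below is our own illustration in plain Python:

```python
from math import factorial

def legendre_explicit(n, t):  # Eq. (101)
    return sum((-1) ** q * factorial(2 * n - 2 * q)
               / (2 ** n * factorial(q) * factorial(n - q) * factorial(n - 2 * q))
               * t ** (n - 2 * q)
               for q in range(n // 2 + 1))

def legendre_recurrence(n, t):  # Eq. (93)
    p_prev, p = 0.0, 1.0  # p_{-1} (immaterial, since gamma_1 = 0) and p_0
    for m in range(1, n + 1):
        p_prev, p = p, ((2 * m - 1) * t * p - (m - 1) * p_prev) / m
    return p

for n in range(8):
    for t in (-0.9, 0.2, 0.75):
        assert abs(legendre_explicit(n, t) - legendre_recurrence(n, t)) < 1e-9
```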

6.2. Hermite polynomials

Hermite polynomials can be generated by the recurrence

$$p_n(t) = t\, p_{n-1}(t) - (n-1)\, p_{n-2}(t), \quad n \geq 1, \tag{102}$$

with $p_0(t) = 1$. The interval of orthogonality is $(-\infty, \infty)$ and the weight function is $w(t) = \mathrm{e}^{-t^2/2}$. Comparing (102) with (82), we get

$$\alpha_n = 0, \quad \beta_n = 1, \quad \gamma_n = (n-1), \quad n \geq 1. \tag{103}$$

Since $\gamma_1 = 0$, we have $D_n(t) = 0$; hence, $p_{-1}(t)$ does not affect the solution of (102). From (80b),

$$\eta_k(t) = -\frac{(k-1)}{t^2}. \tag{104}$$

We can then write the solution of (102) as

$$p_n(t) = C_n(t) = t^n + \sum_{q=1}^{\lfloor n/2 \rfloor} (-1)^q\, t^{n-2q} \sum_{(k_1,\ldots,k_q) \in S_q(2,n)} \prod_{i=1}^{q} (k_i - 1). \tag{105}$$

Denoting $G_{n,q}$ as

$$G_{n,q} = \sum_{(k_1,\ldots,k_q) \in S_q(2,n)} \prod_{i=1}^{q} (k_i - 1), \tag{106}$$

and applying property (3) on $S_q(2, n)$, we obtain the recurrence

$$G_{n,q} = G_{n-1,q} + (n-1)\, G_{n-2,q-1}. \tag{107}$$

It can be shown by mathematical induction using this recurrence that

$$G_{n,q} = \frac{n!}{2^q\, q!\, (n-2q)!}. \tag{108}$$

Combining (105), (106) and (108), we get

$$p_n(t) = \sum_{q=0}^{\lfloor n/2 \rfloor} \frac{(-1)^q\, n!}{2^q\, q!\, (n-2q)!}\, t^{n-2q}, \tag{109}$$

which is the expression for the Hermite polynomial of degree $n$. This can alternatively be written as

$$p_n(t) = (-1)^n\, \mathrm{e}^{t^2/2}\, \frac{\mathrm{d}^n}{\mathrm{d}t^n} \bigl[ \mathrm{e}^{-t^2/2} \bigr].$$

6.3. Chebyshev polynomials

Chebyshev polynomials can be generated by the difference equation

p_n(t) = 2t p_{n-1}(t) - p_{n-2}(t), n ≥ 1, (110)

whose coefficients do not vary with n. The interval of orthogonality is [-1, 1] and the weight function is w(t) = 1/\sqrt{1 - t^2}. Here

α_n = 0, β_n = 2, γ_n = -1, n ≥ 1, (111)

and

η_k(t) = -\frac{1}{4t^2}. (112)

Since η_k(t) does not depend on k, we can apply property (2) on S_q(2,n) and S_q(3,n) in (86a) and (86b). This results in

|S_q(2,n)| = \binom{n-q}{q}, |S_q(3,n)| = \binom{n-q-1}{q}, (113)

followed by

C_n(t) = (2t)^n + \sum_{q=1}^{\lfloor n/2 \rfloor} (-1)^q \binom{n-q}{q} (2t)^{n-2q}, (114a)

D_n(t) = -C_{n-1}(t). (114b)

For the Chebyshev polynomials of the first kind, we have p_0(t) = 1, p_{-1}(t) = t, and therefore

p_n(t) = C_n(t) + t D_n(t) = C_n(t) - t C_{n-1}(t)
       = \frac{1}{2} \Bigl[ (2t)^n + \sum_{q=1}^{\lfloor n/2 \rfloor} (-1)^q \Bigl[ \binom{n-q}{q} + \binom{n-q-1}{q-1} \Bigr] (2t)^{n-2q} \Bigr]. (115)

This can also be expressed as p_n(t) = \cos( n \cos^{-1}(t) ).

For the Chebyshev polynomials of the second kind, we have p_0(t) = 1, p_{-1}(t) = 0, and therefore p_n(t) = C_n(t), where C_n(t) is given by (114a). This can also be expressed as

p_n(t) = \frac{\sin( (n+1) \cos^{-1}(t) )}{\sin( \cos^{-1}(t) )} = \frac{\sin( (n+1) \cos^{-1}(t) )}{\sqrt{1 - t^2}}.
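The closed forms (109) for the Hermite case and (114a)/(115) for the Chebyshev case are easy to sanity-check against their generating recurrences and trigonometric expressions. A sketch (not part of the paper; test values are arbitrary):

```python
import math

# Hermite: closed form (109) vs the recurrence p_n = t p_{n-1} - (n-1) p_{n-2}, p_0 = 1
def hermite_closed(n, t):
    return sum((-1)**q * math.factorial(n) // (2**q * math.factorial(q) * math.factorial(n - 2*q))
               * t**(n - 2*q) for q in range(n // 2 + 1))

def hermite_rec(n, t):
    prev, cur = 0, 1            # p_{-1} (irrelevant since gamma_1 = 0) and p_0
    for k in range(1, n + 1):
        prev, cur = cur, t * cur - (k - 1) * prev
    return cur

assert all(hermite_closed(n, 5) == hermite_rec(n, 5) for n in range(12))

# Chebyshev: closed forms (114a) and (115) vs the trigonometric expressions
def U_closed(n, t):             # second kind, (114a)
    return sum((-1)**q * math.comb(n - q, q) * (2*t)**(n - 2*q) for q in range(n // 2 + 1))

def T_closed(n, t):             # first kind, (115); valid for n >= 1
    return 0.5 * ((2*t)**n + sum((-1)**q * (math.comb(n - q, q) + math.comb(n - q - 1, q - 1))
                                 * (2*t)**(n - 2*q) for q in range(1, n // 2 + 1)))

t = 0.3
for n in range(1, 10):
    assert abs(T_closed(n, t) - math.cos(n * math.acos(t))) < 1e-9
    assert abs(U_closed(n, t) - math.sin((n + 1) * math.acos(t)) / math.sqrt(1 - t*t)) < 1e-9
```

The Hermite comparison runs in exact integer arithmetic (the coefficient n!/(2^q q! (n-2q)!) is always an integer), so no tolerance is needed there.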

7. A cyclic tridiagonal system

Consider the second-order linear difference equation over the finite index interval [1, K+2], K ≥ 2, given by

β_n y_{n+2} = -α_n y_{n+1} - γ_n y_n + β_n x_{n+2}, 1 ≤ n ≤ K, (116)

with variable complex coefficients α_n, β_n, γ_n, where β_n, γ_n ≠ 0, complex forcing term x_{n+2}, and complex boundary values y_1, y_{K+2}. The boundary values are periodic with period K, that is, we have the boundary conditions

y_1 = y_{K+1}, y_{K+2} = y_2. (117)

The K equations in (116) can be written in matrix form after applying the boundary conditions as

Λ_K [y_2, ..., y_{K+1}]^T = diag(β_1, ..., β_K) [x_3, ..., x_{K+2}]^T, (118a)

where

Λ_K = \begin{bmatrix} α_1 & β_1 & & & γ_1 \\ γ_2 & α_2 & β_2 & & \\ & \ddots & \ddots & \ddots & \\ & & γ_{K-1} & α_{K-1} & β_{K-1} \\ β_K & & & γ_K & α_K \end{bmatrix}, \quad β_1, ..., β_K ≠ 0, \quad γ_1, ..., γ_K ≠ 0. (118b)

The system ((118a), (118b)) of equations is a cyclic linear tridiagonal system [21], and the matrix Λ_K in (118b) a cyclic tridiagonal matrix. The solution of system ((118a), (118b)), or alternatively, of the difference equation (116) with boundary conditions (117) for n = 1, ..., K can be expressed as

[y_2, ..., y_{K+1}]^T = Λ_K^{-1} [β_1 x_3, ..., β_K x_{K+2}]^T. (119)

Let

Λ_K^{-1} = [ω_{i,j}]_{i,j=1}^{K}. (120)

The difference equation (116) can be rewritten as

y_{n+2} = A_n y_{n+1} + B_n y_n + x_{n+2}, 1 ≤ n ≤ K, (121a)

as in (41), with

A_n = -\frac{α_n}{β_n}, B_n = -\frac{γ_n}{β_n}. (121b)
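The idea of this section can be illustrated numerically: march the recursion (121a) from trial values y_1, y_2, enforce the periodic conditions (117) through a 2×2 linear system (the solution is affine in (y_1, y_2)), and compare with a direct solve of (118a). A sketch with arbitrary made-up coefficients, not the paper's symbolic formula:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 6
alpha = rng.uniform(-1.0, 1.0, K)   # diagonal alpha_1, ..., alpha_K
beta = rng.uniform(1.0, 2.0, K)     # superdiagonal beta_n (kept nonzero)
gamma = rng.uniform(1.0, 2.0, K)    # subdiagonal gamma_n (kept nonzero)
x = rng.uniform(-1.0, 1.0, K)       # forcing x_3, ..., x_{K+2}

# Cyclic tridiagonal matrix of (118b): corners gamma_1 (top right) and beta_K (bottom left)
Lam = np.zeros((K, K))
for n in range(K):
    Lam[n, n] = alpha[n]
    Lam[n, (n + 1) % K] = beta[n]
    Lam[n, (n - 1) % K] = gamma[n]

y_direct = np.linalg.solve(Lam, beta * x)   # (119): [y_2, ..., y_{K+1}]

def march(y1, y2, forced):
    """Run (121a): y_{n+2} = -(alpha_n/beta_n) y_{n+1} - (gamma_n/beta_n) y_n + x_{n+2}."""
    y = np.zeros(K + 2)                     # y[0] = y_1, ..., y[K+1] = y_{K+2}
    y[0], y[1] = y1, y2
    for n in range(K):
        y[n + 2] = -(alpha[n] * y[n + 1] + gamma[n] * y[n]) / beta[n] + (x[n] if forced else 0.0)
    return y

# Enforce the periodic conditions y_{K+1} = y_1 and y_{K+2} = y_2
e1, e2, p = march(1, 0, False), march(0, 1, False), march(0, 0, True)
M = np.array([[e1[K] - 1.0, e2[K]], [e1[K + 1], e2[K + 1] - 1.0]])
y1, y2 = np.linalg.solve(M, [-p[K], -p[K + 1]])
y = y1 * e1 + y2 * e2 + p

assert np.allclose(y[1:K + 1], y_direct)    # agrees with the direct cyclic solve
```

Both routes compute the same linear-algebra object, so the final check holds whenever Λ_K and the 2×2 boundary matrix are nonsingular (which is the generic case for random coefficients).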

From Proposition 4, the quantities y_3, ..., y_{K+2} of recurrence (121a) can be expressed in terms of y_1, y_2 as

y_{i+1} = C_{i-1} y_2 + D_{i-1} y_1 + \sum_{j=1}^{i-1} E_{i-1}(j+1) x_{j+2}, 2 ≤ i ≤ K+1, (122)

where E_n(m) is given by (65b), and from (13a), (13b), (14a), (14b) and (121b),

C_1 = -\frac{α_1}{β_1}, D_1 = -\frac{γ_1}{β_1}, D_2 = \frac{γ_1 α_2}{β_1 β_2}, (123a)

C_n = (-1)^n \frac{α_1 \cdots α_n}{β_1 \cdots β_n} \Bigl[ 1 + \sum_{q=1}^{\lfloor n/2 \rfloor} \sum_{(k_1,...,k_q) \in S_q(2,n)} σ_{k_1} \cdots σ_{k_q} \Bigr] for n ≥ 2, (123b)

D_n = (-1)^n \frac{γ_1 α_2 \cdots α_n}{β_1 β_2 \cdots β_n} \Bigl[ 1 + \sum_{q=1}^{\lfloor (n-1)/2 \rfloor} \sum_{(k_1,...,k_q) \in S_q(3,n)} σ_{k_1} \cdots σ_{k_q} \Bigr] for n ≥ 3, (123c)

such that S_q(L,U) is defined by (1a)-(1c), and σ_k = -(β_{k-1} γ_k)/(α_{k-1} α_k) as in (65c).

Putting i = K in (122) and applying the boundary condition y_{K+1} = y_1 in the resulting equation, we get

(1 - D_{K-1}) y_1 - C_{K-1} y_2 = \sum_{j=1}^{K-1} E_{K-1}(j+1) x_{j+2}.

Similarly, putting i = K+1 and y_{K+2} = y_2 in (122) results in

-D_K y_1 + (1 - C_K) y_2 = \sum_{j=1}^{K} E_K(j+1) x_{j+2}.

We can then express y_1 and y_2 in terms of x_3, ..., x_{K+2} as

\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 1 - D_{K-1} & -C_{K-1} \\ -D_K & 1 - C_K \end{bmatrix}^{-1} \begin{bmatrix} \sum_{j=1}^{K-1} E_{K-1}(j+1) x_{j+2} \\ \sum_{j=1}^{K} E_K(j+1) x_{j+2} \end{bmatrix}. (124)

Substituting (124) in (122), we can express y_{i+1} as

y_{i+1} = \sum_{j=1}^{K} ω_{i,j} β_j x_{j+2}, i = 1, ..., K, (125)

which implies from (119) and (120) that ω_{i,j} is the element in the ith row and jth column of Λ_K^{-1}. Thus, we can obtain an explicit formula for the inverse of a cyclic tridiagonal matrix by solving a boundary value problem with periodic boundary conditions.

8. Case of constant diagonals

When

α_n = α, β_n = β, γ_n = γ, n = 1, ..., K, (126)

the tridiagonal matrix Φ_K in (62) is Toeplitz, while the cyclic tridiagonal matrix Λ_K in (118b) is circulant, which also implies that it is Toeplitz. The quantities C_n, D_n and E_n(m) in (65b) and (123a)-(123c) can now be expressed in terms of the Chebyshev polynomials of the second kind discussed in Section 6.3 as

E_n(m) = (-1)^{n-m+1} \Bigl( \frac{γ}{β} \Bigr)^{(n-m+1)/2} p_{n-m+1}\Bigl( \frac{α}{2\sqrt{γβ}} \Bigr),

C_n = E_n(1) = (-1)^n \Bigl( \frac{γ}{β} \Bigr)^{n/2} p_n\Bigl( \frac{α}{2\sqrt{γβ}} \Bigr),

D_n = (-1)^n \Bigl( \frac{γ}{β} \Bigr)^{(n+1)/2} p_{n-1}\Bigl( \frac{α}{2\sqrt{γβ}} \Bigr),

where, using (114a),

p_n\Bigl( \frac{α}{2\sqrt{γβ}} \Bigr) = \Bigl( \frac{α}{\sqrt{γβ}} \Bigr)^n \Bigl[ 1 + \sum_{q=1}^{\lfloor n/2 \rfloor} (-1)^q \binom{n-q}{q} \Bigl( \frac{γβ}{α^2} \Bigr)^q \Bigr].

From (72), the elements of the inverse of the tridiagonal matrix Φ_K in (62) under condition (126) can be expressed as

ψ_{i,j} = \begin{cases} (-1)^{i-j} \Bigl( \dfrac{γ}{β} \Bigr)^{(i-j-1)/2} \dfrac{p_{j-1}\bigl( α/(2\sqrt{γβ}) \bigr) \, p_{K-i}\bigl( α/(2\sqrt{γβ}) \bigr)}{β \, p_K\bigl( α/(2\sqrt{γβ}) \bigr)} & \text{if } i > j, \\[2mm] (-1)^{j-i} \Bigl( \dfrac{β}{γ} \Bigr)^{(j-i-1)/2} \dfrac{p_{i-1}\bigl( α/(2\sqrt{γβ}) \bigr) \, p_{K-j}\bigl( α/(2\sqrt{γβ}) \bigr)}{γ \, p_K\bigl( α/(2\sqrt{γβ}) \bigr)} & \text{if } i ≤ j, \end{cases}

provided E_K(1) ≠ 0. The inverse of the cyclic tridiagonal matrix Λ_K in (118b) under condition (126) can also be obtained from E_n(m), C_n, D_n using the approach discussed in Section 7.
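For the constant-diagonal case, the Chebyshev-based closed form for the inverse of a tridiagonal Toeplitz matrix (sub-diagonal γ, diagonal α, super-diagonal β) can be checked directly against a numerical inverse. The sketch below is an illustration with arbitrary test values α = 4, β = 1, γ = 2 (chosen so that α/(2√(γβ)) is real and p_K does not vanish), not a general-purpose implementation:

```python
import numpy as np

K, alpha, beta, gamma = 6, 4.0, 1.0, 2.0     # diagonal, super-, and sub-diagonal entries
t = alpha / (2.0 * np.sqrt(gamma * beta))

def p(n, t):
    """Chebyshev polynomial of the second kind via p_n = 2t p_{n-1} - p_{n-2}, p_0 = 1."""
    a, b = 1.0, 2.0 * t
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2.0 * t * b - a
    return b

# Tridiagonal Toeplitz matrix with constant diagonals
Phi = (np.diag(np.full(K, alpha))
       + np.diag(np.full(K - 1, beta), 1)
       + np.diag(np.full(K - 1, gamma), -1))

# Element-wise closed form in terms of Chebyshev polynomials of the second kind
Psi = np.empty((K, K))
for i in range(1, K + 1):
    for j in range(1, K + 1):
        if i > j:
            Psi[i - 1, j - 1] = ((-1)**(i - j) * (gamma / beta)**((i - j - 1) / 2)
                                 * p(j - 1, t) * p(K - i, t) / (beta * p(K, t)))
        else:
            Psi[i - 1, j - 1] = ((-1)**(j - i) * (beta / gamma)**((j - i - 1) / 2)
                                 * p(i - 1, t) * p(K - j, t) / (gamma * p(K, t)))

assert np.allclose(Psi, np.linalg.inv(Phi))
```

Since t may exceed 1 in magnitude, the Chebyshev values are generated by the recurrence itself rather than by the trigonometric form, which is only valid on [-1, 1].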

9. Conclusions

We have obtained explicit formulae for the elements of the inverse of a general tridiagonal matrix by deriving the explicit solution of a second-order linear nonhomogeneous difference equation with variable coefficients, and then applying the solution to a boundary value problem with zero boundary values. Using the formula for the determinant, we have obtained an expression for the characteristic polynomial. A connection between the matrix inverse and orthogonal polynomials has also been established. It has also been shown how an application of the solution of a second-order linear difference equation to a boundary value problem with periodic boundary conditions can yield the inverse of a cyclic tridiagonal matrix. In the simple case of a tridiagonal or a cyclic tridiagonal matrix with constant diagonals, the elements of the inverse can be expressed in terms of the Chebyshev polynomials of the second kind.

Acknowledgments

The author thanks Dr. Y.V.S.S. Sanyasi Raju, Department of Mathematics, Indian Institute of Technology, Chennai, for his valuable suggestions. He also expresses his gratitude to the anonymous referee for his/her constructive comments.

References

[1] G. Meurant, A review on the inverse of symmetric tridiagonal and block tridiagonal matrices, SIAM J. Matrix Anal. Appl. 13 (3) (1992).
[2] S.-F. Xu, On the Jacobi matrix inverse eigenvalue problem with mixed given data, SIAM J. Matrix Anal. Appl. 17 (3) (1996).
[3] A. Bunse-Gerstner, R. Byers, V. Mehrmann, A chart of numerical methods for structured eigenvalue problems, SIAM J. Matrix Anal. Appl. 13 (2) (1992).
[4] C.F. Fischer, R.A. Usmani, Properties of some tridiagonal matrices and their application to boundary value problems, SIAM J. Numer. Anal. 6 (1) (1969).
[5] L. Lu, W. Sun, The minimal eigenvalues of a class of block-tridiagonal matrices, IEEE Trans. Inform. Theory 43 (2) (1997).
[6] G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, second ed., Oxford University Press, New York.
[7] R.M.M. Mattheij, M.D. Smooke, Estimates for the inverse of tridiagonal matrices arising in boundary-value problems, Linear Algebra Appl. 73 (1986).
[8] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins University Press, Baltimore, MD.
[9] T. Yamamoto, Y. Ikebe, Inversion of band matrices, Linear Algebra Appl. 24 (1979).
[10] W.W. Barrett, A theorem on inverses of tridiagonal matrices, Linear Algebra Appl. 27 (1979).
[11] W.W. Barrett, P.J. Feinsilver, Inverses of banded matrices, Linear Algebra Appl. 41 (1981).

[12] F. Romani, On the additive structure of the inverses of banded matrices, Linear Algebra Appl. 80 (1986).
[13] R. Nabben, Two-sided bounds on the inverses of diagonally dominant tridiagonal matrices, Linear Algebra Appl. 287 (1999).
[14] W.G. Kelley, A.C. Peterson, Difference Equations: An Introduction with Applications, Academic Press, San Diego, CA.
[15] S.N. Elaydi, An Introduction to Difference Equations, Springer, New York.
[16] R.K. Mallik, On the solution of a second order linear homogeneous difference equation with variable coefficients, J. Math. Anal. Appl. 215 (1) (1997).
[17] Y. Ikebe, On inverses of Hessenberg matrices, Linear Algebra Appl. 24 (1979).
[18] G. Szego, Orthogonal Polynomials, fourth ed., AMS, Providence, RI.
[19] S. Elhay, G.H. Golub, J. Kautsky, Jacobi matrices for sums of weight functions, BIT 32 (1992).
[20] S. Elhay, G.H. Golub, J. Kautsky, Updating and downdating of orthogonal polynomials with data fitting applications, SIAM J. Matrix Anal. Appl. 12 (2) (1991).
[21] C. Temperton, Algorithms for the solution of cyclic tridiagonal systems, J. Comput. Physics 19 (1975).


More information

v = 1(1 t) + 1(1 + t). We may consider these components as vectors in R n and R m : w 1. R n, w m

v = 1(1 t) + 1(1 + t). We may consider these components as vectors in R n and R m : w 1. R n, w m 20 Diagonalization Let V and W be vector spaces, with bases S = {e 1,, e n } and T = {f 1,, f m } respectively Since these are bases, there exist constants v i and w such that any vectors v V and w W can

More information

On homogeneous Zeilberger recurrences

On homogeneous Zeilberger recurrences Advances in Applied Mathematics 40 (2008) 1 7 www.elsevier.com/locate/yaama On homogeneous Zeilberger recurrences S.P. Polyakov Dorodnicyn Computing Centre, Russian Academy of Science, Vavilova 40, Moscow

More information

Impulse Response Sequences and Construction of Number Sequence Identities

Impulse Response Sequences and Construction of Number Sequence Identities 1 3 47 6 3 11 Journal of Integer Sequences, Vol. 16 (013), Article 13.8. Impulse Response Sequences and Construction of Number Sequence Identities Tian-Xiao He Department of Mathematics Illinois Wesleyan

More information

Matrices and Linear Algebra

Matrices and Linear Algebra Contents Quantitative methods for Economics and Business University of Ferrara Academic year 2017-2018 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Matrices with convolutions of binomial functions and Krawtchouk matrices

Matrices with convolutions of binomial functions and Krawtchouk matrices Available online at wwwsciencedirectcom Linear Algebra and its Applications 429 (2008 50 56 wwwelseviercom/locate/laa Matrices with convolutions of binomial functions and Krawtchouk matrices orman C Severo

More information

Equivalence constants for certain matrix norms II

Equivalence constants for certain matrix norms II Linear Algebra and its Applications 420 (2007) 388 399 www.elsevier.com/locate/laa Equivalence constants for certain matrix norms II Bao Qi Feng a,, Andrew Tonge b a Department of Mathematical Sciences,

More information

On Powers of General Tridiagonal Matrices

On Powers of General Tridiagonal Matrices Applied Mathematical Sciences, Vol. 9, 5, no., 583-59 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.988/ams.5.49 On Powers of General Tridiagonal Matrices Qassem M. Al-Hassan Department of Mathematics

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Some inequalities for sum and product of positive semide nite matrices

Some inequalities for sum and product of positive semide nite matrices Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,

More information

Gaussian Elimination and Back Substitution

Gaussian Elimination and Back Substitution Jim Lambers MAT 610 Summer Session 2009-10 Lecture 4 Notes These notes correspond to Sections 31 and 32 in the text Gaussian Elimination and Back Substitution The basic idea behind methods for solving

More information

Two Results About The Matrix Exponential

Two Results About The Matrix Exponential Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about

More information

Journal of Symbolic Computation. On the Berlekamp/Massey algorithm and counting singular Hankel matrices over a finite field

Journal of Symbolic Computation. On the Berlekamp/Massey algorithm and counting singular Hankel matrices over a finite field Journal of Symbolic Computation 47 (2012) 480 491 Contents lists available at SciVerse ScienceDirect Journal of Symbolic Computation journal homepage: wwwelseviercom/locate/jsc On the Berlekamp/Massey

More information

The spectral decomposition of near-toeplitz tridiagonal matrices

The spectral decomposition of near-toeplitz tridiagonal matrices Issue 4, Volume 7, 2013 115 The spectral decomposition of near-toeplitz tridiagonal matrices Nuo Shen, Zhaolin Jiang and Juan Li Abstract Some properties of near-toeplitz tridiagonal matrices with specific

More information

Frame Diagonalization of Matrices

Frame Diagonalization of Matrices Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)

More information

arxiv: v3 [math.ra] 10 Jun 2016

arxiv: v3 [math.ra] 10 Jun 2016 To appear in Linear and Multilinear Algebra Vol. 00, No. 00, Month 0XX, 1 10 The critical exponent for generalized doubly nonnegative matrices arxiv:1407.7059v3 [math.ra] 10 Jun 016 Xuchen Han a, Charles

More information

Discrete Applied Mathematics

Discrete Applied Mathematics Discrete Applied Mathematics 194 (015) 37 59 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: wwwelseviercom/locate/dam Loopy, Hankel, and combinatorially skew-hankel

More information

Topic 1: Matrix diagonalization

Topic 1: Matrix diagonalization Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Numerical integration formulas of degree two

Numerical integration formulas of degree two Applied Numerical Mathematics 58 (2008) 1515 1520 www.elsevier.com/locate/apnum Numerical integration formulas of degree two ongbin Xiu epartment of Mathematics, Purdue University, West Lafayette, IN 47907,

More information

Spectra of Semidirect Products of Cyclic Groups

Spectra of Semidirect Products of Cyclic Groups Spectra of Semidirect Products of Cyclic Groups Nathan Fox 1 University of Minnesota-Twin Cities Abstract The spectrum of a graph is the set of eigenvalues of its adjacency matrix A group, together with

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

I-v k e k. (I-e k h kt ) = Stability of Gauss-Huard Elimination for Solving Linear Systems. 1 x 1 x x x x

I-v k e k. (I-e k h kt ) = Stability of Gauss-Huard Elimination for Solving Linear Systems. 1 x 1 x x x x Technical Report CS-93-08 Department of Computer Systems Faculty of Mathematics and Computer Science University of Amsterdam Stability of Gauss-Huard Elimination for Solving Linear Systems T. J. Dekker

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information