BLOCK DIAGONAL AND SCHUR COMPLEMENT PRECONDITIONERS FOR BLOCK TOEPLITZ SYSTEMS WITH SMALL SIZE BLOCKS
WAI-KI CHING, MICHAEL K. NG, AND YOU-WEI WEN

Abstract. In this paper we consider the solution of Hermitian positive definite block Toeplitz systems with small size blocks. We propose and study block diagonal and Schur complement preconditioners for such block Toeplitz matrices. We show that for some block Toeplitz matrices, the spectra of the preconditioned matrices are uniformly bounded except for a fixed number of outliers, and this fixed number depends only on the size of the block. Hence conjugate gradient type methods, when applied to solving these preconditioned block Toeplitz systems with small size blocks, converge very fast. Recursive computation of such block diagonal and Schur complement preconditioners is considered by using the nice matrix representation of the inverse of a block Toeplitz matrix. Applications to block Toeplitz systems arising from least squares filtering problems and queueing networks are presented. Numerical examples are given to demonstrate the effectiveness of the proposed method.

AMS subject classifications. 65F, 65N2

Key words. Block Toeplitz matrix, block diagonal, Schur complement, preconditioners, recursion

1. Introduction. In this paper we consider the solution of a Hermitian positive definite block Toeplitz (BT) system with small size blocks,

\[ A_{n,m} X = B, \tag{1.1} \]

where $X$ and $B$ are $mn$-by-$m$ matrices and

\[ A_{n,m} = \begin{pmatrix} A_0 & A_{-1} & \cdots & A_{-(n-1)} \\ A_1 & A_0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & A_{-1} \\ A_{n-1} & \cdots & A_1 & A_0 \end{pmatrix}, \]

where each $A_j$ is an $m$-by-$m$ matrix with $A_{-j} = A_j^*$ and $m$ is much smaller than $n$. Here $^*$ denotes the conjugate transpose. This kind of linear system arises from many applications such as multichannel least squares filtering in time series [33], signal and image processing [27] and queueing systems [5]. We will discuss these applications, in particular the least squares filtering problems and queueing networks, in Section 5. Recent research on using the preconditioned conjugate gradient method as an iterative method for solving
$n$-by-$n$ Toeplitz systems has attracted much attention.

Advanced Modeling and Applied Computing Laboratory and Department of Mathematics, The University of Hong Kong, Pokfulam Road, Hong Kong (wkc@maths.hku.hk). The research of this author is supported in part by Hong Kong Research Grants Council Grant No. HKU 726/2P and HKU CRCG Grant Nos. 24436, 255.

Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong (mng@math.hkbu.edu.hk). The research of this author is supported in part by Hong Kong Research Grants Council Grant Nos. 746/3P, 735/4P, 735/5P and HKBU FRGs.

Advanced Modeling and Applied Computing Laboratory and Department of Mathematics, The University of Hong Kong, Pokfulam Road, Hong Kong. The research of this author is supported in part by the grant of the project of oil resources in pre-Cenozoic layer in Gulf of Bohai No. KZCXSW-8, and Guangdong Natural Science Foundation Grant No. 32475.

One
of the main results of this methodology is that the complexity of solving a large class of Toeplitz systems can be reduced to $O(n \log n)$ operations, provided that a suitable preconditioner is chosen under certain conditions on the Toeplitz matrix []. Circulant preconditioners [5, 6,, 3, 4, 24, 32, 37, 4], banded Toeplitz preconditioners [7] and multigrid methods [8, 7] have been proposed and analyzed. In these papers, the diagonals of the Toeplitz matrix are assumed to be the Fourier coefficients of a certain generating function. In the literature, there are some papers [25, 28, 29, 34, 35, 36, 38] discussing iterative block Toeplitz solvers. In [28, 35, 36], the authors considered $n$-by-$n$ block Toeplitz matrices with $m$-by-$m$ blocks generated by a Hermitian matrix-valued generating function, and analyzed the associated preconditioning problem using preconditioners generated by nonnegative definite, not essentially singular, matrix-valued functions. In [25, 29, 34], the authors considered block-Toeplitz-Toeplitz-block matrices and studied block band Toeplitz preconditioners. In [38], multigrid methods were applied to solving block-Toeplitz-Toeplitz-block systems. In the above methods, the underlying generating functions are assumed to be known in order to construct the preconditioners. In this paper, we also consider block Toeplitz matrices $A_{n,m}$ generated by a matrix-valued function $F_m(\theta) = [f_{u,v}(\theta)]_{1 \le u,v \le m}$, where the $f_{u,v}(\theta)$ are $2\pi$-periodic functions. Under this assumption, the block $A_j$ of $A_{n,m}$ is given by

\[ A_j = \frac{1}{2\pi} \int_{-\pi}^{\pi} F_m(\theta) e^{ij\theta} \, d\theta. \]

When $F_m(\theta)$ is nonnegative definite and is not essentially singular, the associated block Toeplitz matrix $A_{n,m}$ is positive definite [28, 35]. For such block Toeplitz matrices, Serra [35] has investigated block Toeplitz preconditioners and studied the spectral properties of the preconditioned matrices. He proved that if the block Toeplitz preconditioner is generated by $G_m(\theta)$, and the generalized Rayleigh quotient related to the matrix
functions $F_m(\theta)$ and $G_m(\theta)$ is contained in a set of the form $(c_1, c_2)$ with $0 < c_1$ and $c_2 < \infty$, then the preconditioned conjugate gradient (PCG) method requires only a constant number of iterations to solve the given block Toeplitz system within a preassigned accuracy. In [3], Ng et al. proposed recursive-based PCG methods for solving Toeplitz systems. The idea is to use a principal submatrix of the Toeplitz matrix as a preconditioner. The inverse of the preconditioner can be constructed recursively by using the Gohberg-Semencul formula. They have shown that this method is competitive with the method of circulant preconditioners. In this paper, we study block diagonal and Schur complement preconditioners for block Toeplitz systems. We note that there is a natural partitioning of the block Toeplitz matrix into 2-by-2 blocks as follows:

\[ A_{n,m} = \begin{pmatrix} A^{(1,1)} & A^{(1,2)} \\ A^{(2,1)} & A^{(2,2)} \end{pmatrix}. \tag{1.2} \]

Here $A^{(1,1)}$ and $A^{(2,2)}$ are principal submatrices of $A_{n,m}$. They are also block Toeplitz matrices generated by the same generating function as $A_{n,m}$. Therefore it is
natural and important to examine whether the corresponding system

\[ \begin{pmatrix} A^{(1,1)} & A^{(1,2)} \\ A^{(2,1)} & A^{(2,2)} \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} \tag{1.3} \]

can be solved efficiently by exploiting this partitioning. Here we consider preconditioning $A_{n,m}$ by a block diagonal matrix

\[ B_{n,m} = \begin{pmatrix} A^{(1,1)} & \\ & A^{(2,2)} \end{pmatrix}. \]

Since both $A^{(1,1)}$ and $A^{(2,2)}$ are block Toeplitz matrices generated by the same generating function $F_m(\theta)$, we particularly consider $B_{n,m}$ in the form

\[ B_{n,m} = \begin{pmatrix} A_{n/2,m} & \\ & A_{n/2,m} \end{pmatrix}. \]

Here, without loss of generality, we may assume that $n$ is even. We note that if $A_{n,m}$ is positive definite, then $B_{n,m}$ is also positive definite and the eigenvalues of the preconditioned matrix $B_{n,m}^{-1} A_{n,m}$ lie in the interval $(0, 2)$. On the other hand, the Schur complement arises when we use a block factorization of (1.2). The linear system (1.3) becomes

\[ \begin{pmatrix} I & \\ A^{(2,1)} (A^{(1,1)})^{-1} & I \end{pmatrix} \begin{pmatrix} A^{(1,1)} & A^{(1,2)} \\ & S_{n,m} \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}, \tag{1.4} \]

where $S_{n,m} = A^{(2,2)} - A^{(2,1)} (A^{(1,1)})^{-1} A^{(1,2)}$. We see that this approach requires the formation of the Schur complement matrix. Therefore we consider approximating $S_{n,m}$ by $A^{(2,2)} = A_{n/2,m}$ and study the preconditioner of the form

\[ C_{n,m} = \begin{pmatrix} I & \\ A^{(2,1)} (A^{(1,1)})^{-1} & I \end{pmatrix} \begin{pmatrix} A^{(1,1)} & A^{(1,2)} \\ & A^{(2,2)} \end{pmatrix} = \begin{pmatrix} A^{(1,1)} & A^{(1,2)} \\ A^{(2,1)} & A^{(2,2)} + A^{(2,1)} (A^{(1,1)})^{-1} A^{(1,2)} \end{pmatrix}. \tag{1.5} \]

We note that if $A_{n,m}$ is positive definite, then $C_{n,m}$ is also positive definite and the eigenvalues of the preconditioned matrix $C_{n,m}^{-1} A_{n,m}$ lie inside the interval $(0, 1]$. In particular, at least $mn/2$ eigenvalues of the preconditioned matrix are equal to one. The main result of this paper is that if the generating function $F_m(\theta)$ is Hermitian positive definite and is spectrally equivalent to $G_m(\theta) = [g_{u,v}]_{1 \le u,v \le m}$, where the $g_{u,v}$ are trigonometric polynomials, then the spectra of the preconditioned matrices $B_{n,m}^{-1} A_{n,m}$ and $C_{n,m}^{-1} A_{n,m}$ are uniformly bounded except for a fixed number of outliers, where the number of outliers depends only on $m$. Hence conjugate gradient type methods, when applied to solving these preconditioned block Toeplitz systems,
converge very quickly, especially when $m$ is small.
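To make the two preconditioners concrete, the sketch below builds a small Hermitian positive definite block Toeplitz matrix from a toy symbol $F(\theta) = (3 + 2\cos\theta) I_2$ (hypothetical sizes $m = 2$, $n = 8$; not one of the paper's test problems), forms the block diagonal and Schur complement preconditioners above densely, and checks the spectral claims:

```python
import numpy as np

def block_toeplitz(blocks, n, m):
    """Assemble the mn-by-mn block Toeplitz matrix whose (i,j) block is A_{i-j}."""
    A = np.zeros((m * n, m * n))
    for i in range(n):
        for j in range(n):
            if i - j in blocks:
                A[i*m:(i+1)*m, j*m:(j+1)*m] = blocks[i - j]
    return A

m, n = 2, 8                                   # toy sizes, n even
I2 = np.eye(m)
# blocks of the symbol F(theta) = (3 + 2cos(theta)) I_2: A_0 = 3I, A_{+-1} = I
A = block_toeplitz({0: 3*I2, 1: I2, -1: I2}, n, m)

h = m * n // 2                                # split into the 2x2 block form
A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]

Z = np.zeros((h, h))
B = np.block([[A11, Z], [Z, A22]])            # block diagonal preconditioner
C = np.block([[A11, A12],
              [A21, A22 + A21 @ np.linalg.solve(A11, A12)]])  # Schur complement preconditioner

eigB = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
eigC = np.sort(np.linalg.eigvals(np.linalg.solve(C, A)).real)
print(0 < eigB.min() and eigB.max() < 2)            # spectrum of B^{-1}A in (0, 2)
print(0 < eigC.min() and eigC.max() <= 1 + 1e-10)   # spectrum of C^{-1}A in (0, 1]
print(int(np.sum(np.isclose(eigC, 1.0))) >= h)      # at least mn/2 eigenvalues equal to 1
```

All three checks succeed for this well-conditioned example; the dense solves here are for illustration only, since the whole point of the paper is to apply these preconditioners without forming inverses explicitly.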
We remark that the construction of our preconditioners does not require the underlying matrix generating functions. However, the inverse of the block Toeplitz matrix $A^{(1,1)}$ is required. Using the same idea as in [3], we employ the Gohberg-Semencul formula to represent the inverse of $A^{(1,1)}$, and apply a recursive method to construct it. It is important to note that we do not directly use the Gohberg-Semencul formula to generate the solution of the original block Toeplitz system; the resulting solutions are not accurate when the block Toeplitz matrices are ill-conditioned. Instead, we use the Gohberg-Semencul formula to generate a preconditioner, and then use the preconditioned conjugate gradient method with this preconditioner to compute the solution of the original system iteratively. The outline of this paper is as follows. In Section 2, we analyze the spectra of the preconditioned matrices. In Section 3, we describe the recursive algorithms for the block diagonal and Schur complement preconditioners. Numerical results are given in Section 4 to illustrate the effectiveness of our approach. Finally, concluding remarks are given in Section 5 to address further research issues.

2. Analysis of Preconditioners. In this section, we analyze the spectra of the preconditioned matrices $B_{n,m}^{-1} A_{n,m}$ and $C_{n,m}^{-1} A_{n,m}$. We first note that since $A_{n,m}$ is positive definite, we have the following result given in [, pp. ].

Lemma 2.1. Let $x$ and $y$ be $mn/2$-vectors. Define

\[ \gamma = \sup_{x, y \ne 0} \frac{|x^* A^{(1,2)} y|}{(x^* A_{n/2,m} x)^{1/2} (y^* A_{n/2,m} y)^{1/2}}. \]

If $A_{n,m}$ is Hermitian and positive definite, then $\gamma < 1$. In particular, we have

\[ \gamma^2 = \sup_{y \ne 0} \frac{y^* A^{(2,1)} A_{n/2,m}^{-1} A^{(1,2)} y}{y^* A_{n/2,m} y}. \]

Using Lemma 2.1 and the assumption that $A_{n,m}$ is Hermitian and positive definite, we have the following results. The eigenvalues of the preconditioned matrix $B_{n,m}^{-1} A_{n,m}$ lie inside the interval $(0, 2)$; moreover, if $\mu$ is an eigenvalue of $B_{n,m}^{-1} A_{n,m}$, then $2 - \mu$ is also an eigenvalue of $B_{n,m}^{-1} A_{n,m}$. The eigenvalues of $C_{n,m}^{-1} A_{n,m}$ are inside the interval $(0, 1]$. Moreover, at least $mn/2$ eigenvalues
of $C_{n,m}^{-1} A_{n,m}$ are equal to 1. We then show that the eigenvalues of $B_{n,m}^{-1} A_{n,m}$ and $C_{n,m}^{-1} A_{n,m}$ are uniformly bounded except for a fixed number of outliers for some generating functions $F_m(\theta)$. We first let $E_n(\theta) = [e_{u,v}(\theta)]_{1 \le u,v \le n}$, where $e_{u,v}(\theta) = e^{i(u-v)\theta}$. The block Toeplitz matrix $A_{n,m}$ can be expressed in terms of its generating function:

\[ A_{n,m} = \frac{1}{2\pi} \int_{-\pi}^{\pi} E_n(\theta) \otimes F_m(\theta) \, d\theta. \tag{2.1} \]

Similarly, the block diagonal preconditioner can be expressed as

\[ B_{n,m} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \begin{pmatrix} E_{n/2}(\theta) & \\ & E_{n/2}(\theta) \end{pmatrix} \otimes F_m(\theta) \, d\theta. \tag{2.2} \]
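Numerically, the blocks of $A_{n,m}$ can be recovered from a given symbol by quadrature. The sketch below (with a made-up Hermitian $2\times 2$ symbol, not one of the paper's examples) uses the trapezoidal rule, which is exact for trigonometric polynomial symbols once the number of sample points exceeds the bandwidth; it follows the convention $A_j = \frac{1}{2\pi}\int_{-\pi}^{\pi} F_m(\theta) e^{ij\theta}\,d\theta$ used in the integral representation above:

```python
import numpy as np

def toeplitz_blocks(F, n, K=256):
    """Blocks A_j = (1/2pi) * integral of F(theta) e^{i j theta}, computed by the
    trapezoidal rule on K uniform samples of [0, 2pi) -- exact for trigonometric
    polynomial symbols when K is large enough."""
    theta = 2 * np.pi * np.arange(K) / K
    Fv = np.array([F(t) for t in theta])                  # K x m x m samples
    return {j: (Fv * np.exp(1j * j * theta)[:, None, None]).mean(axis=0)
            for j in range(-(n - 1), n)}

# a made-up Hermitian 2x2 symbol, for illustration only
F = lambda t: np.array([[3 + 2 * np.cos(t), np.exp(1j * t)],
                        [np.exp(-1j * t),   2.0           ]])
blocks = toeplitz_blocks(F, n=4)
print(np.allclose(blocks[0], np.diag([3.0, 2.0])))        # A_0
print(np.allclose(blocks[-1], blocks[1].conj().T))        # A_{-j} = A_j^*
```

The second check confirms that a Hermitian symbol produces a Hermitian block Toeplitz matrix, as required throughout this section.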
We note that there exists a permutation matrix $P_{n,m}$ such that

\[ P_{n,m}^t A_{n,m} P_{n,m} = \tilde{A}_{n,m} = \frac{1}{2\pi} \int_{-\pi}^{\pi} F_m(\theta) \otimes E_n(\theta) \, d\theta \]

and

\[ P_{n,m}^t B_{n,m} P_{n,m} = \tilde{B}_{n,m} = \frac{1}{2\pi} \int_{-\pi}^{\pi} F_m(\theta) \otimes \begin{pmatrix} E_{n/2}(\theta) & \\ & E_{n/2}(\theta) \end{pmatrix} d\theta. \]

It is clear that $\tilde{A}_{n,m}$ and $\tilde{B}_{n,m}$ are Toeplitz-block (TB) matrices, and that $A_{n,m}$ and $\tilde{A}_{n,m}$ have the same spectrum, as do $B_{n,m}$ and $\tilde{B}_{n,m}$. Since the spectra of $B_{n,m}^{-1} A_{n,m}$ and $\tilde{B}_{n,m}^{-1} \tilde{A}_{n,m}$ are the same, it suffices to study the spectral properties of $\tilde{B}_{n,m}^{-1} \tilde{A}_{n,m}$. We give the following two lemmas.

Lemma 2.2. Let $A = [a_{u,v}]_{1 \le u,v \le n}$ and $B = [b_{i,j}]_{1 \le i,j \le m}$. Then for any $m$-by-$n$ matrices $X = (x_1, x_2, \ldots, x_n)$ and $Y = (y_1, y_2, \ldots, y_n)$, we have

\[ \mathrm{vec}(X)^* (A \otimes B) \, \mathrm{vec}(Y) = \sum_{u=1}^{n} \sum_{v=1}^{n} a_{u,v} \, x_u^* B y_v, \tag{2.3} \]

where $\mathrm{vec}(X) = (x_1^t, x_2^t, \ldots, x_n^t)^t$ and $\mathrm{vec}(Y) = (y_1^t, y_2^t, \ldots, y_n^t)^t$.

Lemma 2.3. Let $x = (x_1^t, x_2^t, \ldots, x_m^t)^t$ with $x_l = (x_{(l-1)n+1}, x_{(l-1)n+2}, \ldots, x_{ln})^t$ ($1 \le l \le m$), and let $p_1(\theta) = (\check{p}_{11}(\theta), \check{p}_{21}(\theta), \ldots, \check{p}_{m1}(\theta))^t$ and $p_2(\theta) = (\check{p}_{12}(\theta), \check{p}_{22}(\theta), \ldots, \check{p}_{m2}(\theta))^t$ with

\[ \check{p}_{j1}(\theta) = \sum_{l=1}^{n/2} x_{(j-1)n+l} \, e^{-i(l-1)\theta} \quad \text{and} \quad \check{p}_{j2}(\theta) = e^{-i(n/2)\theta} \sum_{l=1}^{n/2} x_{(j-1)n+n/2+l} \, e^{-i(l-1)\theta}. \]

If $A_{n,m}$ is generated by $F_m(\theta)$, then we have

\[ x^* \tilde{B}_{n,m} x = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ p_1(\theta)^* F_m(\theta) p_1(\theta) + p_2(\theta)^* F_m(\theta) p_2(\theta) \right] d\theta \tag{2.4} \]

and

\[ x^* \tilde{A}_{n,m} x = x^* \tilde{B}_{n,m} x + \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ p_1(\theta)^* F_m(\theta) p_2(\theta) + p_2(\theta)^* F_m(\theta) p_1(\theta) \right] d\theta. \tag{2.5} \]
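Lemma 2.2 is the Kronecker-product identity written entrywise; it can be checked directly on random data (column-major vec, as in the lemma):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
X = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Y = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

vec = lambda M: M.reshape(-1, order='F')       # stack columns, as in the lemma
lhs = vec(X).conj() @ np.kron(A, B) @ vec(Y)
rhs = sum(A[u, v] * (X[:, u].conj() @ B @ Y[:, v])
          for u in range(n) for v in range(n))
print(np.allclose(lhs, rhs))                   # the two sides agree
```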
Proof. Construct $X = (x_1, x_2, \ldots, x_m)$, i.e., $x = \mathrm{vec}(X)$. Using Lemma 2.2, we obtain

\[ \mathrm{vec}(X)^* \tilde{A}_{n,m} \, \mathrm{vec}(X) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \sum_{u=1}^{m} \sum_{v=1}^{m} f_{u,v}(\theta) \, x_u^* E_n(\theta) x_v \, d\theta \tag{2.6} \]

and

\[ \mathrm{vec}(X)^* \tilde{B}_{n,m} \, \mathrm{vec}(X) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \sum_{u=1}^{m} \sum_{v=1}^{m} f_{u,v}(\theta) \, x_u^* \begin{pmatrix} E_{n/2}(\theta) & \\ & E_{n/2}(\theta) \end{pmatrix} x_v \, d\theta. \tag{2.7} \]

We note that

\[ x_u^* \begin{pmatrix} E_{n/2}(\theta) & \\ & E_{n/2}(\theta) \end{pmatrix} x_v = \overline{\check{p}_{u1}(\theta)} \, \check{p}_{v1}(\theta) + \overline{\check{p}_{u2}(\theta)} \, \check{p}_{v2}(\theta), \]

while

\[ x_u^* E_n(\theta) x_v = \big( \overline{\check{p}_{u1}(\theta)} + \overline{\check{p}_{u2}(\theta)} \big) \big( \check{p}_{v1}(\theta) + \check{p}_{v2}(\theta) \big). \]

By using (2.7), one can obtain (2.4) directly. Similarly, by using (2.6), (2.5) can also be derived.

Next we show that the eigenvalues of $B_{n,m}^{-1} A_{n,m}$ are uniformly bounded except for a fixed number of outliers when $F_m(\theta)$ is Hermitian positive definite and is spectrally equivalent to $G_m(\theta) = [g_{u,v}]_{1 \le u,v \le m}$, where the $g_{u,v}$ are trigonometric polynomials. We remark that the fixed number of outliers depends on $m$.

Theorem 2.4. Let $F_m(\theta)$ be Hermitian positive definite. Suppose $F_m(\theta)$ is spectrally equivalent to $G_m(\theta) = [g_{u,v}]_{1 \le u,v \le m}$, where the $g_{u,v}$ are trigonometric polynomials and $s$ is the largest degree of the polynomials in $G_m(\theta)$. Then there exist two positive numbers $\alpha$ and $\beta$ ($\alpha < \beta$) independent of $n$ such that for all $n > 2s^*$ ($s^* = \lceil s/2 \rceil$), at most $2ms^*$ eigenvalues of $\tilde{B}_{n,m}^{-1} \tilde{A}_{n,m}$ (or $B_{n,m}^{-1} A_{n,m}$) are outside the interval $[\alpha, \beta]$.

Proof. We note that by the spectral equivalence there exist positive numbers $\gamma_1$ and $\gamma_2$ such that

\[ 0 < \gamma_1 \le \frac{x^* F_m(\theta) x}{x^* G_m(\theta) x} \le \gamma_2 < \infty, \quad x \in \mathbb{C}^m \setminus \{0\}, \ \theta \in [-\pi, \pi]. \tag{2.8} \]

We define the two sets

\[ \Upsilon = \{ r : r = jn + n/2 - s^* + 1, \ldots, jn + n/2 + s^* \ \text{for} \ j = 0, 1, \ldots, m-1 \} \]

and

\[ \Omega = \{ z = (z_1, z_2, \ldots, z_{mn})^t : z_k = 0 \ \text{for} \ k \in \Upsilon \}. \]
We note that $\Omega$ is an $(mn - 2ms^*)$-dimensional subspace of $\mathbb{C}^{mn}$. It follows that for $x \in \Omega$ and $p_u(\theta)$ ($u = 1, 2$) defined in Lemma 2.3, every term of $\overline{\check{p}_{u1}(\theta)} \, g_{u,v}(\theta) \, \check{p}_{v2}(\theta)$ is a multiple of $e^{ik\theta}$ with $k \le (n/2 - s^* - 1) - (n/2 + s^*) + s = s - 2s^* - 1 < 0$, and therefore

\[ \int_{-\pi}^{\pi} p_1(\theta)^* G_m(\theta) p_2(\theta) \, d\theta = 0, \tag{2.9} \]

and, by taking conjugates,

\[ \int_{-\pi}^{\pi} p_2(\theta)^* G_m(\theta) p_1(\theta) \, d\theta = 0. \tag{2.10} \]

Since $F_m(\theta) - \gamma_1 G_m(\theta)$ is positive semi-definite,

\[ \left| \int_{-\pi}^{\pi} p_1^* [F_m - \gamma_1 G_m] p_2 + p_2^* [F_m - \gamma_1 G_m] p_1 \, d\theta \right| \le \int_{-\pi}^{\pi} p_1^* [F_m - \gamma_1 G_m] p_1 + p_2^* [F_m - \gamma_1 G_m] p_2 \, d\theta. \]

By using Lemma 2.3, (2.9) and (2.10), we get, for $x \in \Omega$,

\[ \frac{x^* \tilde{A}_{n,m} x}{x^* \tilde{B}_{n,m} x} = 1 + \frac{\int_{-\pi}^{\pi} p_1^* [F_m - \gamma_1 G_m] p_2 + p_2^* [F_m - \gamma_1 G_m] p_1 \, d\theta}{\int_{-\pi}^{\pi} p_1^* F_m p_1 + p_2^* F_m p_2 \, d\theta} \ge 1 - \frac{\int_{-\pi}^{\pi} p_1^* [F_m - \gamma_1 G_m] p_1 + p_2^* [F_m - \gamma_1 G_m] p_2 \, d\theta}{\int_{-\pi}^{\pi} p_1^* F_m p_1 + p_2^* F_m p_2 \, d\theta} \ge \frac{\gamma_1}{\gamma_2}, \tag{2.11} \]

where the last inequality uses $\int p_u^* G_m p_u \, d\theta \ge \gamma_2^{-1} \int p_u^* F_m p_u \, d\theta$, which follows from (2.8). The upper bound $2 - \gamma_1/\gamma_2$ follows in the same way.
Therefore, we have

\[ \alpha = \frac{\gamma_1}{\gamma_2} \le \frac{x^* \tilde{A}_{n,m} x}{x^* \tilde{B}_{n,m} x} \le 2 - \frac{\gamma_1}{\gamma_2} = \beta, \quad x \in \Omega. \]

By the minimax characterization of eigenvalues, it follows that at most $2ms^*$ eigenvalues of $\tilde{B}_{n,m}^{-1} \tilde{A}_{n,m}$ lie outside the interval $[\alpha, \beta]$.

In [35], Serra explicitly constructed $G_m(\theta)$ by using the eigen-decomposition of $F_m(\theta)$: $F_m(\theta) = Q(\theta)^* \Lambda(\theta) Q(\theta)$, where $\Lambda(\theta)$ is a diagonal matrix containing the eigenvalues $\lambda_j(F_m(\theta))$ ($j = 1, \ldots, m$) of $F_m(\theta)$. Suppose $\lambda_j(F_m(\theta))$ has a zero at $\theta_j$ of even order $\nu_j$. Then $G_m(\theta)$ is constructed as $G_m(\theta) = Q(\theta_j)^* \Gamma(\theta) Q(\theta_j)$, where $\Gamma(\theta)$ is a diagonal matrix with

\[ [\Gamma(\theta)]_{kk} = \begin{cases} (2 - 2\cos(\theta - \theta_j))^{\nu_j/2}, & k = j, \\ 1, & \text{otherwise}. \end{cases} \]

It is clear that each entry of $G_m(\theta)$ is a trigonometric polynomial. The largest degree of the polynomials in $G_m(\theta)$ depends on the orders of the zeros of the eigenvalues of $F_m(\theta)$. It has been shown that $F_m(\theta)$ is spectrally equivalent to $G_m(\theta)$; see for instance [35]. Similarly, one can show that the eigenvalues of $C_{n,m}^{-1} A_{n,m}$ are uniformly bounded except for a fixed number of outliers, where this fixed number depends on $m$.

Theorem 2.5. Let $F_m(\theta)$ be Hermitian positive definite. Suppose $F_m(\theta)$ is spectrally equivalent to $G_m(\theta) = [g_{u,v}]_{1 \le u,v \le m}$, where the $g_{u,v}$ are trigonometric polynomials and $s$ is the largest degree of the polynomials in $G_m(\theta)$. Then there exist two positive numbers $\alpha$ and $\beta$ ($\alpha < \beta$) independent of $n$ such that for all $n > 2s^*$ ($s^* = \lceil s/2 \rceil$), at most $ms^*$ eigenvalues of $\tilde{C}_{n,m}^{-1} \tilde{A}_{n,m}$ (or $C_{n,m}^{-1} A_{n,m}$) are outside the interval $[\alpha, \beta]$.

Proof. If $\mu$ and $2 - \mu$ are a pair of eigenvalues of $B_{n,m}^{-1} A_{n,m}$, the corresponding eigenvalue of $C_{n,m}^{-1} A_{n,m}$ is $\mu(2 - \mu)$. Using Theorem 2.4, we can find two positive numbers $\alpha = (\gamma_1/\gamma_2)^2$ and $\beta = 1$ such that the result holds.

3. Recursive Computation of $B_{n,m}$ and $C_{n,m}$. In the previous section, we have shown that both $B_{n,m}$ and $C_{n,m}$ are good preconditioners for $A_{n,m}$. However, the inverses of $B_{n,m}$ and $C_{n,m}$ involve the inverse of $A_{n/2,m}$, whose direct computation is still expensive. In this section, we present a recursive method to construct the preconditioners $B_{n,m}$ and
$C_{n,m}$ efficiently. We remark that the inverse of a Toeplitz matrix can be reconstructed from a small number of its columns. Gohberg and Semencul [8] and Trench [39] showed that if the (1,1)th entry of the inverse of a Toeplitz matrix is nonzero, then the first and last columns of the inverse of the Toeplitz matrix are sufficient for this purpose. In [39] a recursion formula was given; in [8] a nice matrix representation of the inverse, well known as the Gohberg-Semencul formula, was presented. In [2], an inversion formula was exhibited which works for every nonsingular Toeplitz matrix and uses the solutions of two equations (the so-called fundamental equations), where the right-hand side of
one of them is a shifted column of the Toeplitz matrix. Later, Ben-Artzi and Shalom [2], Labahn and Shalom [26], Ng et al. [3] and Heinig [2] studied the representation when the (1,1)th entry of the inverse of a Toeplitz matrix is zero. In [3], we have used the matrix representation of the inverse of a Toeplitz matrix to construct effective preconditioners for Toeplitz matrices. For block Toeplitz matrices, Gohberg and Heinig [9] extended the Gohberg-Semencul formula to handle this case. It was shown that if $A_{n,m}$ is nonsingular and the two equations

\[ A_{n,m} U^{(n)} = E^{(n)} \quad \text{and} \quad A_{n,m} V^{(n)} = F^{(n)} \tag{3.1} \]

are solvable, with $U^{(n)} = (U_1^{(n)t}, U_2^{(n)t}, \ldots, U_n^{(n)t})^t$, $V^{(n)} = (V_1^{(n)t}, V_2^{(n)t}, \ldots, V_n^{(n)t})^t$, $E^{(n)} = (I_m, 0, \ldots, 0)^t$ and $F^{(n)} = (0, \ldots, 0, I_m)^t$, where the $U_j^{(n)}$ and $V_j^{(n)}$ are $m$-by-$m$ matrices and $I_m$ is the identity matrix, and if $U_1^{(n)}$ and $V_n^{(n)}$ are nonsingular, then the inverse of $A_{n,m}$ can be expressed as

\[ A_{n,m}^{-1} = \Psi_{n,m} W_{n,m} \Psi_{n,m}^* - \Phi_{n,m} Z_{n,m} \Phi_{n,m}^*, \tag{3.2} \]

where $\Psi_{n,m}$ and $\Phi_{n,m}$ are $mn$-by-$mn$ lower triangular block Toeplitz matrices whose first block columns are built from the blocks of $U^{(n)}$ and $V^{(n)}$, respectively,

\[ \Psi_{n,m} = \begin{pmatrix} U_1^{(n)} & & & \\ U_2^{(n)} & U_1^{(n)} & & \\ \vdots & \ddots & \ddots & \\ U_n^{(n)} & \cdots & U_2^{(n)} & U_1^{(n)} \end{pmatrix}, \]

with $\Phi_{n,m}$ defined analogously from $V_1^{(n)}, \ldots, V_n^{(n)}$. Moreover, $W_{n,m}$ and $Z_{n,m}$ are the block diagonal matrices

\[ W_{n,m} = \mathrm{diag}\big( (U_1^{(n)})^{-1}, (U_1^{(n)})^{-1}, \ldots, (U_1^{(n)})^{-1} \big) \]

and
\[ Z_{n,m} = \mathrm{diag}\big( (V_n^{(n)})^{-1}, (V_n^{(n)})^{-1}, \ldots, (V_n^{(n)})^{-1} \big). \]

For the preconditioners $B_{n,m}$ and $C_{n,m}$, the inverse of $A_{n/2,m}$ can be represented by the formula (3.2). The representation is obtained by solving the two linear systems

\[ A_{n/2,m} U^{(n/2)} = E^{(n/2)} \quad \text{and} \quad A_{n/2,m} V^{(n/2)} = F^{(n/2)}. \]

These two systems can be solved efficiently by the preconditioned conjugate gradient (PCG) method with $B_{n/2,m}$ or $C_{n/2,m}$ as preconditioner. The inverse of $A_{n/4,m}$ involved in the preconditioners $B_{n/2,m}$ and $C_{n/2,m}$ can in turn be generated recursively by using (3.2), until the size of the linear system is sufficiently small. The procedure for the recursive computation of $B_{n,m}$ and $C_{n,m}$ is as follows:

Procedure. Input($A_{k,m}$, $k$); Output($U^{(k)}$, $V^{(k)}$).
If $k \le N$, then solve the two linear systems $A_{k,m} U^{(k)} = E^{(k)}$ and $A_{k,m} V^{(k)} = F^{(k)}$ exactly by direct methods; else:
compute $U^{(k/2)}$ and $V^{(k/2)}$ by calling the procedure with the input matrix $A_{k/2,m}$ and the integer $k/2$;
construct $A_{k/2,m}^{-1}$ from the output $U^{(k/2)}$ and $V^{(k/2)}$ via the formula (3.2);
solve the two linear systems $A_{k,m} U^{(k)} = E^{(k)}$ and $A_{k,m} V^{(k)} = F^{(k)}$ by the preconditioned conjugate gradient method with $B_{k,m}$ (or $C_{k,m}$) as the preconditioner.

We remark that if each block of the block Toeplitz matrix $A_{n,m}$ is Hermitian, then we only need to solve the one linear system $A_{n,m} U^{(n)} = E^{(n)}$ in order to represent the inverse of the block Toeplitz matrix; in this case, $V^{(n)}$ can be obtained from $U^{(n)}$ by the persymmetry of $A_{n,m}$, namely $V_j^{(n)} = U_{n-j+1}^{(n)}$.

3.1. Computational Cost. The main computational cost of the method comes from the matrix-vector multiplications $A_{n,m} X$ and $B_{n,m}^{-1} X$ (or $C_{n,m}^{-1} X$) in each PCG iteration, where $X$ is an $mn$-by-$m$ matrix. We note that $A_{n,m} X$ can be computed with $O(m^2)$ FFTs of length $2n$ by first embedding $A_{n,m}$ into a $2mn$-by-$2mn$ block circulant matrix and then carrying out the multiplication by using the decomposition of the
block circulant matrix. Let $S_{n,m}$ be a block circulant matrix with $m$-by-$m$ blocks. One can find a permutation matrix $P_{n,m}$ such that $\tilde{S}_{n,m} = (S_{i,j})_{1 \le i,j \le m} = P_{n,m}^t S_{n,m} P_{n,m}$ is a circulant-block matrix, where each $S_{i,j}$ is an $n$-by-$n$ circulant matrix. Let $S_{i,j}(:,1)$ denote the first column of $S_{i,j}$. It is known that $S_{i,j}$ can be diagonalized with an $n \log n$-cost FFT, i.e., $S_{i,j} = F^* \Lambda_{i,j} F$, where $F$ and $F^*$ are the Fourier transform matrix and the inverse Fourier transform matrix, and $\Lambda_{i,j} = \mathrm{diag}(F S_{i,j}(:,1))$. Thus we obtain

\[ \tilde{S}_{n,m} = (I \otimes F^*) \begin{pmatrix} \Lambda_{11} & \Lambda_{12} & \cdots & \Lambda_{1m} \\ \Lambda_{21} & \Lambda_{22} & \cdots & \Lambda_{2m} \\ \vdots & \vdots & & \vdots \\ \Lambda_{m1} & \Lambda_{m2} & \cdots & \Lambda_{mm} \end{pmatrix} (I \otimes F) = (I \otimes F^*) P^t D P (I \otimes F), \]

where $D = \mathrm{diag}(D_1, D_2, \ldots, D_n)$ is a block diagonal matrix with $[D_k]_{ij} = [\Lambda_{ij}]_{kk}$, i.e., the $(i,j)$th entry of $D_k$ is equal to the $(k,k)$th entry of $\Lambda_{ij}$. Therefore, the block circulant matrix-vector product can be computed as

\[ S_{n,m} X = P (I \otimes F^*) P^t D P (I \otimes F) P^t X. \]

We note that it requires $O(m^2 n \log n)$ operations to compute the block diagonal matrix $D$, and the block diagonal matrix-vector multiplication requires $O(m^3 n)$ operations. Thus the overall multiplication requires $O(m^2 n \log n + m^3 n)$ operations. For the preconditioner $B_{n,m}$ or $C_{n,m}$, we need to compute matrix-vector products $A_{n/2,m}^{-1} Y$, where $Y$ is an $mn/2$-by-$m$ matrix. According to (3.2), the inverse of a block Toeplitz matrix can be written as a product of lower triangular block Toeplitz matrices. Therefore, the matrix-vector product $A_{n/2,m}^{-1} Y$ can be computed with FFTs by embedding such lower triangular block Toeplitz matrices into block circulant matrices. Such a matrix-vector multiplication requires $O(m^2 n \log n + m^3 n)$ operations. Now we estimate the total cost of the recursive computation for solving the two linear systems $A_{n,m} U^{(n)} = E^{(n)}$ and $A_{n,m} V^{(n)} = F^{(n)}$. For simplicity, we assume $n = 2^l$. Suppose the numbers of iterations required for convergence in solving the two $mn_j$-by-$mn_j$ linear systems $A_{n_j,m} U^{(n_j)} = E^{(n_j)}$ and $A_{n_j,m} V^{(n_j)} = F^{(n_j)}$, where $n_j = n/2^j$, are given by $c_j$ for $j = 0, 1, \ldots, L$. We note that the smallest
size of the system is equal to $N = n/2^L$. Therefore the total cost of the recursive computation of $B_{n,m}$ (or $C_{n,m}$) is about $\sum_{j=0}^{L} c_j f_j$, where $f_j$ denotes the cost of each PCG iteration when the size of the system is $n_j$. Since the cost of an $n_j$-length FFT is roughly twice the cost of an $n_j/2$-length FFT, and the cost of each PCG iteration is $O(m^2 n_j \log n_j + m^3 n_j)$ operations, the total cost of the recursive computation is roughly bounded by $O(\max_j \{c_j\} (m^2 n \log n + m^3 n))$. Next, we consider the number of operations required for circulant preconditioners. For the block circulant matrix $S_{n,m}$, the solution of $S_{n,m} Z = B$ can be obtained from

\[ Z = S_{n,m}^{-1} B = P (I \otimes F^*) P^t D^{-1} P (I \otimes F) P^t B. \]
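The diagonalization above amounts to one FFT per channel pair plus $n$ independent $m \times m$ solves. A compact sketch (taking the FFT along the block index instead of forming explicit permutation matrices; the blocks are made-up data, not from the paper's examples):

```python
import numpy as np

def block_circulant_solve(C, b):
    """Solve S z = b, where S is block circulant with first block column
    C[0..n-1] (each m x m).  The FFT along the block index block-diagonalizes
    S into n small m x m systems D_k, as described in the text."""
    n, m, _ = C.shape
    D = np.fft.fft(C, axis=0)                  # D_k = sum_j C_j e^{-2 pi i jk/n}
    bh = np.fft.fft(b.reshape(n, m), axis=0)   # transform the right-hand side
    zh = np.stack([np.linalg.solve(D[k], bh[k]) for k in range(n)])
    return np.fft.ifft(zh, axis=0).reshape(n * m)

# made-up, well-conditioned block data
rng = np.random.default_rng(1)
n, m = 8, 2
C = rng.standard_normal((n, m, m))
C[0] += 20 * np.eye(m)                         # keep every D_k nonsingular
S = np.zeros((n * m, n * m))
for i in range(n):                             # dense S, for verification only
    for j in range(n):
        S[i*m:(i+1)*m, j*m:(j+1)*m] = C[(i - j) % n]
b = rng.standard_normal(n * m)
z = block_circulant_solve(C, b)
print(np.allclose(S @ z, b))                   # True: FFT solve matches dense solve
```

This is the $O(m^2 n \log n + m^3 n)$ pattern quoted in the text: $m^2$ (here $m$-column) FFTs of length $n$, then $n$ dense $m \times m$ solves.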
Table 4.1: Number of iterations required for convergence (columns: $n$, B, C, M).

In order to compute the inverse of $D$, $O(m^2 n \log n + m^3 n)$ operations are required. Moreover, the matrix-vector multiplication requires $O(m^3 n)$ operations; thus $S_{n,m}^{-1} B$ can be computed in $O(m^2 n \log n + m^3 n)$ operations, which is the same complexity as our proposed method. In the next section, we show that our proposed method is competitive with circulant preconditioners.

4. Numerical Results. In this section, we test our proposed method. The initial guess is the zero vector. The stopping criterion is $\|r_q\|_2 / \|r_0\|_2 < 10^{-7}$, where $r_q$ is the residual vector at the $q$th iteration of the PCG method. We use Matlab 6 to conduct the numerical tests. We remark that our preconditioners are constructed recursively. For instance, when we solve $A_{256,m} U^{(256)} = E^{(256)}$, the preconditioners are constructed by solving $A_{128,m} U^{(128)} = E^{(128)}$ and $A_{64,m} U^{(64)} = E^{(64)}$ using the preconditioned conjugate gradient method with stopping tolerance $\tau$, and using a direct solver for $A_{32,m} U^{(32)} = E^{(32)}$. In all the tests, the coarsest level is set to be $n = 32$. In the first test, we consider the following example of a generating function [35]:

\[ F_2(\theta) = \begin{pmatrix} 2 \sin^2(\theta/2) & |\theta|^{5/2} \\ |\theta|^{5/2} & 2 \sin^2(\theta/2) \end{pmatrix}. \]

Table 4.1 shows the corresponding numbers of iterations required for convergence using our proposed preconditioners B and C. As a comparison, the numbers of iterations using the preconditioner M studied in [35] are also listed. Our proposed preconditioners are competitive with the preconditioner studied in [35]. We also remark that the construction of our proposed preconditioners does not require knowledge of the underlying matrix generating function of the block Toeplitz matrices. In the second test, we consider the following four examples.

Example 1:
\[ F_3(\theta) = \begin{pmatrix} 2\theta^4 + 1 & \theta^3 & \theta^4 \\ \theta^3 & 3\theta^4 + 1 & \theta \\ \theta^4 & \theta & 2\theta^4 + 1 \end{pmatrix} \]

Example 2:
\[ F_3(\theta) = \begin{pmatrix} \theta^4 + 1 & \theta^3 & \theta \\ \theta^3 & 2\theta^4 + 1 & \theta^2 \\ \theta & \theta^2 & 5 \end{pmatrix} \]

Example 3:
\[ F_2(\theta) = \begin{pmatrix} 8\theta^2 & (\sin \theta)^4 \\ (\sin \theta)^4 & 8\theta^2 \end{pmatrix} \]
Example 4:
\[ F_3(\theta) = \begin{pmatrix} (\sin \theta)^4 & \theta^2 & (\sin \theta)^8 \\ \theta^2 & (\sin \theta)^4 & (\sin \theta)^8 \\ (\sin \theta)^8 & (\sin \theta)^8 & \theta^4 \end{pmatrix} \]

These generating functions are Hermitian matrix-valued functions, and the generated block Toeplitz matrices are positive definite. In Example 1, the generated block Toeplitz matrices are well-conditioned. For Examples 2-4, the generating functions are singular at some points, and therefore the corresponding block Toeplitz matrices are ill-conditioned. In Tables 4.2-4.5, we give the number of iterations required for convergence using $B_{n,m}$ and $C_{n,m}$ as preconditioners. Here we set $\tau = 10^{-7}$ in the recursive calculation of the preconditioner and cap the maximum number of iterations. If the method does not converge within the maximum number of iterations, we write '>' in the tables. According to Tables 4.2-4.5, the number of iterations for the non-preconditioned systems (column I) increases as the size $n$ increases. However, the number of iterations for the preconditioned systems (columns B and C) decreases or remains almost constant as $n$ increases in Examples 1-3. The performance of the Schur complement preconditioner C is generally better than that of the block diagonal preconditioner B. We also compare our preconditioners with block circulant preconditioners; columns S and T give the numbers of iterations required for the Strang and T. Chan block circulant preconditioners, respectively. We note that the Strang block circulant preconditioner may not be positive definite for ill-conditioned matrices; indeed, there are several negative eigenvalues of the Strang block circulant preconditioners in Examples 3 and 4. Even when the Strang circulant preconditioned system converges, the solution may not be correct. We also see from Tables 4.4 and 4.5 that the T. Chan block circulant preconditioner does not work. In [, 3], Chan et al. have constructed best circulant preconditioners by approximating the generating function with a convolution product that matches the zeros of the generating function. They showed that these
circulant preconditioners are effective for ill-conditioned Toeplitz matrices. Here we also construct such best block circulant preconditioners (column $K_i$, where $i$ refers to the order of the kernel used) and test their performance. We note from the tables that our proposed preconditioners perform quite well. For Example 4, the methods with best block circulant preconditioners do not converge within the maximum number of iterations. We also report the computational times required for convergence in Examples 1-4 in Tables 4.6-4.9, respectively. If the number of iterations exceeds the maximum, we write '**' in the tables. We see that the computational times required by the block diagonal preconditioner and the Schur complement preconditioner are less than those of the block circulant preconditioners, especially when $n$ is large. We also note from the tables that the performance of the Schur complement preconditioner is better than that of the block diagonal preconditioner. To illustrate the fast convergence of the proposed method, in Table 4.10 we report the number of eigenvalues within a small interval around 1 for $n = 128$ in Examples 1-4. We find that the spectra of the preconditioned matrices $C_{n,m}^{-1} A_{n,m}$ and $B_{n,m}^{-1} A_{n,m}$ are more clustered around 1 than those of the circulant preconditioners and of the unpreconditioned matrix. Finally, we report that the numbers of iterations are about the same even when the stopping tolerance $\tau$ of the preconditioned conjugate gradient method at each level
Table 4.2: Number of iterations required for convergence in Example 1 (columns: $n$, I, B, C, S, T, $K_4$, $K_6$, $K_8$).

Table 4.3: Number of iterations required for convergence in Example 2.

Table 4.4: Number of iterations required for convergence in Example 3.

Table 4.5: Number of iterations required for convergence in Example 4.

in the recursive calculation of the proposed preconditioners is $10^{-3}$, $10^{-4}$ or $10^{-7}$. Next, we consider two applications of block Toeplitz systems arising from multichannel least squares filtering and queueing networks.

Application I: Multichannel least squares filtering is a data processing method that
Table 4.6: Computational times required for convergence in Example 1 (columns: $n$, I, B, C, S, T, $K_4$, $K_6$, $K_8$).

Table 4.7: Computational times required for convergence in Example 2.

Table 4.8: Computational times required for convergence in Example 3.

Table 4.9: Computational times required for convergence in Example 4.

makes use of the signals from each of $m$ channels. We represent the multichannel data by $x_t$, where $x_t$ is a column vector whose elements are the signals from each channel. Since we are interested in digital processing methods, we suppose that the signals are sampled at discrete, equally spaced time points, represented by the time index $t$. Without loss of generality, we require that $t$ takes successive integer values. If we let $x_{i,t}$ represent the signal coming from the $i$th channel ($i = 1, 2, \ldots, m$), the
Table 4.10: The percentages of eigenvalues within the interval $[0.99, 1.01]$ for $n = 128$ in Examples 1-4 (columns: I, B, C, S, T, $K_4$, $K_6$, $K_8$).

multichannel signal can be written as $x_t = (x_{1,t}, x_{2,t}, \ldots, x_{m,t})^T$. The filter is represented by the coefficients $S_1, S_2, \ldots, S_n$, where each coefficient $S_k$ ($k = 1, 2, \ldots, n$) is an $m$-by-$m$ matrix. The multichannel signal $x_t$ received by the array system is the input to the filter, and the resulting output of the filter is a multichannel signal, which we denote by the column vector $y_t = (y_{1,t}, y_{2,t}, \ldots, y_{m,t})^T$. The relationship between the input $x_t$ and the output $y_t$ is given by the convolution formula

\[ y_t = S_1 x_t + S_2 x_{t-1} + \cdots + S_n x_{t-n+1}. \]

The determination of the filter coefficients is based on the concept of a desired output, denoted by the column vector $z_t = (z_{1,t}, z_{2,t}, \ldots, z_{m,t})^T$. On each channel ($i = 1, 2, \ldots, m$), there will be an error between the desired output $z_{i,t}$ and the actual output $y_{i,t}$. The mean square value of this error is $E[(z_{i,t} - y_{i,t})^2]$, and the sum of the mean square errors over all the channels is

\[ \sum_{i=1}^{m} E[(z_{i,t} - y_{i,t})^2]. \]

The least squares determination of the filter coefficients requires that this sum be minimized. The minimization leads to the set of linear equations

\[ \begin{pmatrix} R_0 & R_{-1} & \cdots & R_{-n+1} \\ R_1 & R_0 & \cdots & R_{-n+2} \\ \vdots & \vdots & & \vdots \\ R_{n-1} & R_{n-2} & \cdots & R_0 \end{pmatrix} \begin{pmatrix} S_1 \\ S_2 \\ \vdots \\ S_n \end{pmatrix} = \begin{pmatrix} G_1 \\ G_2 \\ \vdots \\ G_n \end{pmatrix}, \tag{4.1} \]

where $R_j = E[x_t x_{t-j}^T]$ and $G_j = E[z_t x_{t-j+1}^T]$.
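Under stated assumptions (sample averages in place of the expectations, and a plain dense solve instead of the paper's PCG iteration), the normal equations above can be set up and solved as follows; the signals here are synthetic stand-ins, not the image data of the experiment:

```python
import numpy as np

def multichannel_filter(x, z, n):
    """Least squares filter coefficients S_1..S_n from the block Toeplitz
    normal equations, with R_j and G_j replaced by sample estimates.
    x, z: m x T arrays of input and desired-output signals."""
    m, T = x.shape
    R = [x[:, j:] @ x[:, :T - j].T / (T - j) for j in range(n)]   # R_j = E[x_t x_{t-j}^T]
    A = np.block([[R[i - k] if i >= k else R[k - i].T
                   for k in range(n)] for i in range(n)])          # block Toeplitz system matrix
    G = np.vstack([z[:, j - 1:] @ x[:, :T - j + 1].T / (T - j + 1)
                   for j in range(1, n + 1)])                      # G_j = E[z_t x_{t-j+1}^T]
    S = np.linalg.solve(A, G)                                      # stacked S_1..S_n
    return S.reshape(n, m, m)

# synthetic 3-channel data, analogous in shape to the RGB example in the text
rng = np.random.default_rng(2)
m, T, n = 3, 2000, 4
x = rng.standard_normal((m, T))
z = rng.standard_normal((m, T))
S = multichannel_filter(x, z, n)
print(S.shape)   # (4, 3, 3): n filter coefficients, each m-by-m
```

The dense solve costs $O((mn)^3)$; the point of the paper is that the same system, being block Toeplitz, can instead be solved by PCG with the $B_{n,m}$ or $C_{n,m}$ preconditioners.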
Fig. 4.1: Color image and data vectors.

Here $R_j$ is an $m$-by-$m$ matrix containing the autocorrelation coefficients of the input signal $x_t$, and $G_j$ is an $m$-by-$m$ matrix containing the cross-correlation coefficients between the desired output $z_t$ and the input signal $x_t$. In the test, a 128-by-128 color image is used to generate the data points. We consider the pixel values of the color image to be $x_t$ ($t = 1, 2, \ldots, 128^2$); see Figure 4.1. We remark that a color image can be regarded as a set of three images in its primary color components: red, green and blue. In the least squares filtering, there are therefore three channels, i.e., $m = 3$. Our task is to generate the multichannel least squares filters such that the sum of the mean square errors over all the channels,

\[ \sum_{i=1}^{m} E\big\{ \big( x_{i,t+1} - [S_1 x_t + S_2 x_{t-1} + \cdots + S_n x_{t-n+1}]_i \big)^2 \big\}, \]

is minimized. Such least squares filters have been commonly used in color image processing for coding and enhancement [27]. Table 4.11 shows the number of iterations required for convergence. Notice that the generating function of the block Toeplitz matrices is unknown in this case. However, the construction of the proposed preconditioners only requires the entries of $A_{n,m}$ and does not require explicit knowledge of the generating function $F_m(\theta)$ of $A_{n,m}$. We find that the generated block Toeplitz matrices are very ill-conditioned. Therefore, the number of iterations required for convergence without preconditioning is very large, but the performance of the preconditioners $B_{n,m}$ and $C_{n,m}$ is very good. We also generate further synthetic multichannel data sets to test the performance of our proposed method for larger $m$. Table 4.12 shows the number of iterations required for convergence; the results show that our proposed preconditioners perform quite well.

Application II: We next apply the preconditioning method to compute the steady-state probability distribution of a Markovian overflow queueing network with batch arrivals of customers. This is a non-symmetric problem. It is the
2-queue overflow network considered in [12, 15]. The network consists of two queues with exogenous Poisson batch arrivals and exponential servers. Whenever queue 2 is full, the arriving
Table 4.2. Number of iterations required for convergence.

Table 4.3. Number of iterations required for convergence.

customers will overflow to queue 1 if it is not yet full. Otherwise the customers will be blocked and lost, see Figure 4.2.

Fig. 4.2. The two-queue overflow network (λ_i: exogenous arrival rate, µ_i: service rate, s_1 = s_2 = 1).

For queue i (i = 1, 2), let λ_i be the exogenous input rate, µ_i the service rate, s_i the number of servers, and n_i - s_i the buffer size. For simplicity, we let the batch
size k of the arrivals follow the distribution Prob(k = 1) = p_i and Prob(k = 2) = 1 - p_i, where 0 < p_i < 1. We assume that when the batch size is 2 and there is only one waiting space left, the queueing network still accepts one of the two customers. Then the generator matrix of the queueing system is given by

    K = Q_1 ⊗ I_{n_2} + I_{n_1} ⊗ Q_2 + diag(0, ..., 0, 1) ⊗ R,        (4.2)

where

          [    λ_i           -µ_i                                        ]
          [  -p_i λ_i         λ_i + µ_i    -µ_i                          ]
    Q_i = [ -(1-p_i)λ_i      -p_i λ_i       λ_i + µ_i    -µ_i            ]        (4.3)
          [       .                .             .          .            ]
          [            -(1-p_i)λ_i      -p_i λ_i      λ_i + µ_i   -µ_i   ]
          [                        -(1-p_i)λ_i          -λ_i        µ_i  ]

and

          [    λ_2                                      ]
          [  -p_2 λ_2         λ_2                       ]
    R   = [ -(1-p_2)λ_2      -p_2 λ_2      λ_2          ]        (4.4)
          [       .                .          .         ]
          [             -(1-p_2)λ_2        -λ_2     0   ]

and I_{n_i} is the identity matrix of size n_i. For the case p_i = 1, the batch size is always one, and cosine-based preconditioners have been constructed to solve the queueing problem, see for instance [12, 15]. However, cosine-based preconditioners are not applicable when the input is a batch-arrival Poisson process. Since the generator matrix is singular, we consider the equivalent linear system

    (K + ee^t)p = e.        (4.5)

Here e = (0, 0, ..., 0, 1)^t, and the matrix K + ee^t in (4.5) is irreducibly diagonally dominant and hence invertible. The steady-state distribution vector is obtained by normalizing p. We note that the matrix K + ee^t can be expressed as the sum of a block-Toeplitz matrix

    A = Q_1 ⊗ I_{n_2} + I_{n_1} ⊗ Q_2 + diag(µ_1, 0, ..., 0, λ_1) ⊗ I_{n_2}

and a low-rank matrix L = (K + ee^t) - A, which collects the remaining boundary corrections of Q_1, the overflow term diag(0, ..., 0, 1) ⊗ R and the rank-one term ee^t.
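For concreteness, the construction (4.2)-(4.5) can be exercised on a tiny instance. The sketch below is not the paper's code: the rates, batch probabilities and queue sizes are illustrative assumptions, the Kronecker ordering follows (4.2) as displayed, and the steady-state vector is recovered with a dense direct solve instead of the preconditioned iteration used in the paper:

```python
# Sketch: generator (4.2)-(4.4) of a tiny two-queue overflow network with
# batch arrivals, and the steady-state distribution via (4.5).
# All parameter values below are illustrative assumptions.

def Q_block(lam, mu, p, n):
    # Generator block (4.3): batches of size 1 (prob p) or 2 (prob 1-p),
    # one exponential server of rate mu; column j is the "from" state.
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        if j < n - 1:                         # arrivals possible
            M[j][j] += lam
            if j < n - 2:
                M[j + 1][j] -= p * lam        # batch of size 1
                M[j + 2][j] -= (1 - p) * lam  # batch of size 2
            else:
                M[j + 1][j] -= lam            # one space left: accept one
        if j > 0:                             # service possible
            M[j][j] += mu
            M[j - 1][j] -= mu
    return M

def R_block(lam, p, n):
    # Overflow block (4.4): arrival part only, no service term.
    M = [[0.0] * n for _ in range(n)]
    for j in range(n - 1):
        M[j][j] += lam
        if j < n - 2:
            M[j + 1][j] -= p * lam
            M[j + 2][j] -= (1 - p) * lam
        else:
            M[j + 1][j] -= lam
    return M

def kron(A, B):
    na, nb = len(A), len(B)
    return [[A[i // nb][j // nb] * B[i % nb][j % nb]
             for j in range(na * nb)] for i in range(na * nb)]

n1, n2 = 4, 4
Q1 = Q_block(1.0, 2.0, 0.7, n1)
Q2 = Q_block(1.5, 2.0, 0.6, n2)
I1 = [[float(i == j) for j in range(n1)] for i in range(n1)]
I2 = [[float(i == j) for j in range(n2)] for i in range(n2)]
D = [[float(i == j == n1 - 1) for j in range(n1)] for i in range(n1)]
Rov = R_block(1.5, 0.6, n2)

N = n1 * n2
T1, T2, T3 = kron(Q1, I2), kron(I1, Q2), kron(D, Rov)
K = [[T1[i][j] + T2[i][j] + T3[i][j] for j in range(N)] for i in range(N)]

# Generator property: every column of K sums to zero.
colsum_err = max(abs(sum(K[i][j] for i in range(N))) for j in range(N))

# (4.5): solve (K + e e^t) p = e with e = (0, ..., 0, 1)^t, then normalize.
B = [row[:] for row in K]
B[N - 1][N - 1] += 1.0
rhs = [0.0] * (N - 1) + [1.0]

def gauss_solve(M, b):
    # Gaussian elimination with partial pivoting.
    Nn = len(M)
    M = [row[:] + [b[r]] for r, row in enumerate(M)]
    for c in range(Nn):
        piv = max(range(c, Nn), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, Nn):
            f = M[r][c] / M[c][c]
            for k in range(c, Nn + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * Nn
    for r in range(Nn - 1, -1, -1):
        x[r] = (M[r][Nn] - sum(M[r][k] * x[k] for k in range(r + 1, Nn))) / M[r][r]
    return x

p = gauss_solve(B, rhs)
total = sum(p)
p = [v / total for v in p]

# p is the steady-state distribution: K p = 0, p > 0, sum(p) = 1.
residual = max(abs(sum(K[i][j] * p[j] for j in range(N))) for i in range(N))
```

The direct solve is only for illustration; for large n_1 n_2 the system (4.5) is solved via the Sherman-Morrison-Woodbury formula applied to the splitting A + L, together with the proposed block-Toeplitz preconditioners.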
To solve (4.5), we can use the Sherman-Morrison-Woodbury formula for the matrix A + L to solve for the vector p. Since A is a block-Toeplitz matrix, our proposed method can be applied. Although A is not symmetric, the representation of the inverse of a non-symmetric block-Toeplitz matrix is still defined (see the Appendix). Table 4.4 shows the number of iterations required for convergence using the proposed method. We remark that the Strang preconditioner is singular in this application. We see that our preconditioners are efficient.

Table 4.4. Number of iterations required for the overflow queueing networks (n_1 = n_2 = n and p_1 = p_2 = p).

5. Concluding Remarks. In this paper, we have proposed block diagonal and Schur complement preconditioners for block-Toeplitz matrices. We have proved that for some block-Toeplitz coefficient matrices, the spectra of the preconditioned matrices are uniformly bounded except for a fixed number of outliers, where the number of outliers depends only on m. Therefore the conjugate gradient method converges very quickly when applied to the preconditioned systems, especially when m is small. Applications to block-Toeplitz systems arising from least squares filtering problems and queueing networks were discussed. The method can also be applied to other non-symmetric problems arising in queueing systems [15].

Acknowledgements: The authors are very much indebted to the referees for their valuable comments and suggestions, which greatly improved this paper.

REFERENCES

[1] O. Axelsson, Iterative Solution Methods, Cambridge University Press, New York, 1994.
[2] A. Ben-Artzi and T. Shalom, On Inversion of Toeplitz and Close to Toeplitz Matrices, Linear Algebra Appl., 75 (1986).
[3] F. Di Benedetto, Analysis of Preconditioning Techniques for Ill-conditioned Toeplitz Matrices, SIAM J. Sci. Comput., 16 (1995).
[4] A. Ben-Artzi and T. Shalom, On Inversion of Block Toeplitz Matrices, Integral Equations and Operator Theory, 8 (1985).
[5] F. Di
Benedetto and S. Serra, A Unifying Approach to Abstract Matrix Algebra Preconditioning, Numer. Math., 82 (1999), pp. 57-90.
[6] R. Chan, Circulant Preconditioners for Hermitian Toeplitz Systems, SIAM J. Matrix Anal. Appl., 10 (1989).
[7] R. Chan, Toeplitz Preconditioners for Toeplitz Systems with Nonnegative Generating Functions, IMA J. Numer. Anal., 11 (1991).
[8] R. Chan, Q. Chang and H. Sun, Multigrid Method for Ill-Conditioned Symmetric Toeplitz Systems, SIAM J. Sci. Comput., 19 (1998).
[9] R. Chan and W. Ching, Toeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals, SIAM J. Sci. Comput., 17 (1996).
[10] R. Chan and M. Ng, Conjugate Gradient Methods for Toeplitz Systems, SIAM Review, 38 (1996).
[11] R. Chan, M. Ng and A. Yip, The Best Circulant Preconditioners for Hermitian Toeplitz Systems II: The Multiple-Zero Case, Numer. Math., 92 (2002), pp. 17-40.
[12] R. Chan, C. Wong and W. Ching, Optimal Trigonometric Preconditioners for Elliptic Problems and Queueing Problems, SEA Bull. Math., 3 (1996), pp. 7-24.
[13] R. Chan, A. Yip and M. Ng, The Best Circulant Preconditioners for Hermitian Toeplitz Systems, SIAM J. Numer. Anal., 38 (2000).
[14] T. Chan, An Optimal Circulant Preconditioner for Toeplitz Systems, SIAM J. Sci. Statist. Comput., 9 (1988).
[15] W. Ching, Iterative Methods for Queueing and Manufacturing Systems, Springer Monographs in Mathematics, Springer, London, 2001.
[16] J. Durbin, The Fitting of Time Series Models, Rev. Int. Stat., 28 (1960).
[17] G. Fiorentino and S. Serra, Multigrid Methods for Symmetric Positive Definite Block Toeplitz Matrices with Nonnegative Generating Functions, SIAM J. Sci. Comput., 17 (1996), pp. 1068-1081.
[18] I. Gohberg and A. Semencul, On the Inversion of Finite Toeplitz Matrices and Their Continuous Analogs, Mat. Issled., 2 (1972).
[19] I. Gohberg and G. Heinig, Inversion of Finite-section Toeplitz Matrices Consisting of Elements of a Non-commutative Algebra, Rev. Roum. Math. Pures et Appl., 19 (1974).
[20] G. Heinig, On the Reconstruction of Toeplitz Matrix Inverses from Columns, Linear Algebra Appl., 350 (2002).
[21] G. Heinig and K. Rost, Algebraic Methods for Toeplitz-like Matrices and Operators, Birkhäuser Verlag, Basel, 1984.
[22] S. Haykin, Adaptive Filter Theory, Prentice-Hall, London, 1991.
[23] R. Horn and C. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[24] T. Huckle, Circulant and Skew Circulant Matrices for Solving Toeplitz Matrix Problems, SIAM J. Matrix Anal. Appl., 13 (1992).
[25] X. Jin, Band-Toeplitz Preconditioners for Block Toeplitz Systems, J. Comput. Appl. Math., 70 (1996).
[26] G. Labahn and T. Shalom, Inversion of Toeplitz Matrices with Only Two Standard Equations,
Linear Algebra Appl., 175 (1992).
[27] J. Lim, Two-Dimensional Signal and Image Processing, Prentice Hall, 1990.
[28] M. Miranda and P. Tilli, Asymptotic Spectra of Hermitian Block Toeplitz Matrices and Preconditioning Results, SIAM J. Matrix Anal. Appl., 21 (2000).
[29] M. Ng, Band Preconditioners for Block-Toeplitz-Toeplitz-Block Systems, Linear Algebra Appl., 259 (1997).
[30] M. Ng, K. Rost and Y. Wen, On Inversion of Toeplitz Matrices, Linear Algebra Appl., 348 (2002), pp. 145-151.
[31] M. Ng, H. Sun and X. Jin, Recursive-based PCG Methods for Toeplitz Systems with Nonnegative Generating Functions, SIAM J. Sci. Comput., 24 (2003).
[32] D. Potts and G. Steidl, Preconditioners for Ill-Conditioned Toeplitz Systems Constructed from Positive Kernels, SIAM J. Sci. Comput., 22 (2001).
[33] M. Priestley, Spectral Analysis and Time Series, Academic Press, 1981.
[34] S. Serra, Preconditioning Strategies for Asymptotically Ill-conditioned Block Toeplitz Systems, BIT, 34 (1994).
[35] S. Serra, Spectral and Computational Analysis of Block Toeplitz Matrices Having Nonnegative Definite Matrix-Valued Generating Functions, BIT, 39 (1999).
[36] S. Serra, Asymptotic Results on the Spectra of Block Toeplitz Preconditioned Matrices, SIAM J. Matrix Anal. Appl., 20 (1998), pp. 31-44.
[37] G. Strang, A Proposal for Toeplitz Matrix Calculations, Stud. Appl. Math., 74 (1986), pp. 171-176.
[38] H. Sun, X. Jin and Q. Chang, Convergence of the Multigrid Method for Ill-Conditioned Block Toeplitz Systems, BIT, 41 (2001), pp. 179-190.
[39] W. Trench, An Algorithm for the Inversion of Finite Toeplitz Matrices, SIAM J. Appl. Math., 12 (1964).
[40] E. Tyrtyshnikov, Optimal and Superoptimal Circulant Preconditioners, SIAM J. Matrix Anal. Appl., 13 (1992).
Appendix. Let

              [ A_0      A_1      ...  A_{n-1} ]
              [ A_{-1}   A_0      ...  A_{n-2} ]
    A_{n,m} = [   .        .       .     .     ]
              [ A_{1-n}  A_{2-n}  ...  A_0     ]

be a non-Hermitian block-Toeplitz matrix. We consider

              [ A_0      A_{-1}   ...  A_{1-n} ]
              [ A_1      A_0      ...  A_{2-n} ]
    Â_{n,m} = [   .        .       .     .     ]
              [ A_{n-1}  A_{n-2}  ...  A_0     ]

and

              [ A_{-n} ]              [ A_n ]
    M^{(n)} = [   .    ]   M̂^{(n)} =  [  .  ]
              [ A_{-2} ]              [ A_2 ]
              [ A_{-1} ]              [ A_1 ]

Suppose the matrix equations

    A_{n,m} U^{(n)} = E^{(n)},   A_{n,m} V^{(n)} = M^{(n)}

and

    Â_{n,m} Û^{(n)} = E^{(n)},   Â_{n,m} V̂^{(n)} = M̂^{(n)}

have the solutions

    U^{(n)} = [ U_1^{(n)}; U_2^{(n)}; ...; U_n^{(n)} ],   V^{(n)} = [ V_1^{(n)}; V_2^{(n)}; ...; V_n^{(n)} ],
    Û^{(n)} = [ Û_1^{(n)}; Û_2^{(n)}; ...; Û_n^{(n)} ],   V̂^{(n)} = [ V̂_1^{(n)}; V̂_2^{(n)}; ...; V̂_n^{(n)} ].

Then A_{n,m}^{-1} can be represented as a sum of products of block lower triangular Toeplitz matrices, built from the blocks of U^{(n)} and V^{(n)}, and block upper triangular Toeplitz matrices with identity diagonal blocks, built from the blocks of Û^{(n)} and V̂^{(n)}; this is the block analogue of the Gohberg-Heinig inversion formula, see [19, 30].
More informationMAT 610: Numerical Linear Algebra. James V. Lambers
MAT 610: Numerical Linear Algebra James V Lambers January 16, 2017 2 Contents 1 Matrix Multiplication Problems 7 11 Introduction 7 111 Systems of Linear Equations 7 112 The Eigenvalue Problem 8 12 Basic
More informationFast and Robust Phase Retrieval
Fast and Robust Phase Retrieval Aditya Viswanathan aditya@math.msu.edu CCAM Lunch Seminar Purdue University April 18 2014 0 / 27 Joint work with Yang Wang Mark Iwen Research supported in part by National
More informationConvexity of the Joint Numerical Range
Convexity of the Joint Numerical Range Chi-Kwong Li and Yiu-Tung Poon October 26, 2004 Dedicated to Professor Yik-Hoi Au-Yeung on the occasion of his retirement. Abstract Let A = (A 1,..., A m ) be an
More informationFrame Diagonalization of Matrices
Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)
More informationWe first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix
BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity
More informationDeterminant evaluations for binary circulant matrices
Spec Matrices 014; :187 199 Research Article Open Access Christos Kravvaritis* Determinant evaluations for binary circulant matrices Abstract: Determinant formulas for special binary circulant matrices
More informationMatrix Inequalities by Means of Block Matrices 1
Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,
More informationQR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR
QR-decomposition The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which In Matlab A = QR [Q,R]=qr(A); Note. The QR-decomposition is unique
More informationIndex. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)
page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation
More informationEigenvalues and Eigenvectors
/88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationModified Gauss Seidel type methods and Jacobi type methods for Z-matrices
Linear Algebra and its Applications 7 (2) 227 24 www.elsevier.com/locate/laa Modified Gauss Seidel type methods and Jacobi type methods for Z-matrices Wen Li a,, Weiwei Sun b a Department of Mathematics,
More informationPreconditioning Techniques Analysis for CG Method
Preconditioning Techniques Analysis for CG Method Huaguang Song Department of Computer Science University of California, Davis hso@ucdavis.edu Abstract Matrix computation issue for solve linear system
More informationTheorem A.1. If A is any nonzero m x n matrix, then A is equivalent to a partitioned matrix of the form. k k n-k. m-k k m-k n-k
I. REVIEW OF LINEAR ALGEBRA A. Equivalence Definition A1. If A and B are two m x n matrices, then A is equivalent to B if we can obtain B from A by a finite sequence of elementary row or elementary column
More informationDEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular
form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix
More information